
  • December 8, 1980

    It was probably almost midnight, maybe later, in a club called the Blarney Stone, near Centenary College on Youree Drive in Shreveport, when we got the word.

    I don’t know whether there was a general announcement—in my mind someone heard it on the radio and whispered it in the ear of one of the guys in the band between songs. I told the story that way for a while, until the bass player corrected me: someone in the kitchen in the back was watching Monday Night Football and got the word from Howard Cosell.

    Anyway, my bandmate—I don’t remember which one—took the microphone and told us John Lennon was dead.

    Pete, the piano player, sat down at his Fender Rhodes and pecked out a solemn, wordless version of “Imagine.” Then we left.

    We did not go home. Some of us went to the Freeman-Harris Cafe, a soul-food place in an area called either the Bottoms or Ledbetter Heights (depending on whether or not you got the Chamber of Commerce memo) where James Brown and B.B. King and Prince would eat when they were in town. We got a bottle of Jack Daniels and a plate of fried chicken livers.

    We got drunk that night and told ourselves it was because some unworthy little man — some Lee Harvey Oswald-looking motherfucker—had murdered a person who was important to us. 

    Already we were calling it an assassination, an ideologically inspired assault. Lennon was a martyr, not a victim, and we were the bathetically bereft. We flattered ourselves by imagining he was one of us, that we were like him. Someone organized a vigil, there were candles, there was weeping. There were all sorts of posturing and wallowing and bingeing on sentiment. We were sad, no doubt about that, but there was something self-aggrandizing about the way we exploited the occasion of his death. We were so sensitive and dashing with our moist eyes and broken hearts.

    Every generation needs its pick-up saints and tragic ballads; John Lennon was a guitar player and a songwriter, a singer in a rock ’n’ roll band. He was talented and uncommonly intelligent, and the facts of his execution secured his place among the tragically slain famous young. He wasn’t so fascinating that we couldn’t make him stand for whatever we wanted.

    We still do.

    OH YOKO

    A crazy young man named Mark David Chapman shot Lennon outside his 72nd Street apartment building across from Central Park on Manhattan’s Upper West Side. He shot him maybe because he wanted to become him, because he admired him, because he thought that by shooting Lennon five times in the back he might transcend the dull pain of the everyday.

    “Then this morning I went to the bookstore and bought The Catcher in the Rye,” Chapman told the police three hours after he shot Lennon. “I’m sure the large part of me is Holden Caulfield, who is the main person in the book. The small part of me must be the devil.

    “I went to the building. It’s called the Dakota. I stayed there until he came out and asked him to sign my album. At that point my big part won and I wanted to go back to my hotel, but I couldn’t. I waited until he came back. He came in a car. Yoko walked past first and I said hello, I didn’t want to hurt her.”

    Lennon signed that album, a copy of the then-newly released Double Fantasy. Chapman stuck it in a flower planter by the Dakota’s front gate while he waited for Lennon to return. After the murder, another Lennon fan retrieved it and turned it over to police. They returned it to the fan after Chapman was sentenced, along with a letter of gratitude from the district attorney. A few years ago, the album went up for sale—the selling agent commented on Chapman’s “forensically enhanced fingerprints” visible on the cover.

    “Then John came and looked at me and printed me. I took the gun from my coat pocket and fired at him. I can’t believe I could do that. I just stood there clutching the book. I didn’t want to run away. I don’t know what happened to the gun.”

    The police had the gun. Yoko took John’s bloody glasses and put them on the cover of her album, Season of Glass. You can go to Cleveland and see them in the Rock and Roll Hall of Fame. Some people think that’s ghastly, but Yoko Ono has an answer for them.

    “John would have approved and I will explain why,” she has written. “I wanted the whole world to be reminded of what happened. People are offended by the glasses and the blood? The glasses are a tiny part of what happened. If people can’t stomach the glasses, I’m sorry. There was a dead body. There was blood. His whole body was bloody … That’s the reality … He was killed. People are offended by the glasses and the blood? John had to stomach a lot more.”

    WORKING-CLASS HERO

    John Lennon wasn’t what we imagined, what we made of him in our heads. He was flesh and blood and bone; a painfully thin forty-year-old man who had been through a lot. He was very rich and probably passably happy. He had a family, he had re-entered public life. He seemed less shrill, less angry than he had a few years before. The tragedy of John Lennon—which may not be the whole truth, but seems a reasonable guess — is that he was in a pretty good frame of mind when he was killed. He might have been poised for a major comeback. He might have made a lot more good music.

    I didn’t care much for Double Fantasy, in part because it was a gentler, kinder album than I expected from John Lennon. I liked my Lennon nasty, taunting—the Lennon of “How Do You Sleep?” as opposed to the Lennon of “Watching the Wheels” or “Beautiful Boy.” I prefer the primal-scream Lennon who didn’t “believe in Beatles” to the domesticated cat who sang about starting over.

    But then it’s been more than forty years since Lennon was murdered, far longer than he knew Paul McCartney. The further away we get from these events, the more compressed they seem — the Beatles were a moment, not an epoch.

    None of the Beatles escaped mediocrity in their solo work. Perhaps that was inevitable. You can only burn so hot so long, and in retrospect you can see the strain starting to show as early as Sgt. Pepper’s Lonely Hearts Club Band and certainly in the albums that followed it.

    These days, you can get by on the collateral Beatles you encounter; “Penny Lane” bleeds from a passing speaker and you smile from the contact high. You don’t need to hunker in the dark with headphones, listening for the point where the chicken at the end of “Good Morning Good Morning” turns into a guitar. If you are of a certain age and inclination, you have probably assimilated the entire Beatles’ catalog. You might not have any trouble describing John Lennon as one of your heroes.

    I don’t know that there’s anything wrong with that other than my sneaking suspicion that Lennon himself might not approve. He might be flattered, but so much of his work seems to be aimed at the demystification of idols. Lennon was nothing if not a reflexive iconoclast. He could be ruthlessly careerist—all the Beatles were—but it’s difficult to imagine him buying into the idea that he was essentially more than a Liverpool lad who had a run of luck and a bit of talent.

    “I saw him as a cardboard cutout on an album cover,” Chapman said at his parole hearing in 2003. “I was very young and stupid, and you get caught up in the media and the records and the music. And now I — I’ve come to grips with the fact that John Lennon was a person. This has nothing to do with being a Beatle or a celebrity or famous. He was breathing, and I knocked him right off his feet, and I don’t feel because of that I have any right to be standing on my feet here, you know, asking for anything. I don’t have a leg to stand on because I took his right out from under him, and he bled to death. And I’m sorry that ever occurred.”

    Chapman effected the apotheosis of John Lennon. We’re sorry that ever occurred.

    December 8, 2024
    beatles, john-lennon, music, the-beatles, yoko-ono

  • Bereft

    November 26, 2024

  • Safe at Home


    Fittingly, the earliest movie I recall is 1962’s Safe at Home!, a baseball quickie made in the wake of the historic 1961 season in which the New York Yankees’ Roger Maris hit sixty-one home runs to break the single-season record that had been held by Babe Ruth since 1927.

    The movie stars Maris and teammates Mickey Mantle (who had also been part of that historic home run chase, ending the season with fifty-four dingers) and Whitey Ford, the Yankees’ best starting pitcher. It was filmed in Fort Lauderdale, Florida, in early 1962, while Mantle, Maris and Ford were in the middle of the Yankees’ spring training. I’ve never been able to ascertain exactly how much time they dedicated to the film, but I have heard that they spent part of two days filming their parts. Some online commentators have noted that the movie feels like a publicity effort from the Yankees, and that’s probably right.

    While the film is not hard to find, I’ve never re-watched it. I thought about doing it as part of my research for this piece, but I decided against it on a couple of counts. As it stands, I have fond feelings about the movie and all those who participated in it. If I watched it again I do not imagine I’d feel any better about the movie. There is a chance that I would be depressed by it, that I would see it as the cynical exercise in profit-taking it no doubt was. Besides, the movie was not meant to sustain the scrutiny of a 21st century film critic; it was meant as a matinee diversion for little boys.

    I don’t remember much about the film but have read plot summaries of it. A young baseball-obsessed boy attends a Yankees spring training game in Florida and comes home to lie to his classmates and Little League teammates about his widower father being friends with the players.

    They don’t believe him (they shouldn’t, because he’s lying), so he tries to arrange for the big leaguers to come to a school banquet. Mantle and Maris ultimately refuse to go along with the sham and impart a life lesson about the importance of honesty. I’m sure this lesson was lost on me; I was at an age when I conflated Elvis Presley with my Uncle Roy (both of them had served in the U.S. Army) and believed electricity was a living thing that might, like a snake, bite at you if you came too close.

    I thought Mickey Mantle was a family friend.

    I imagine I saw Safe at Home! sometime in the summer of 1962 (it was released in June) when I was three years old. The newest research indicates that our earliest memories may begin when we are around thirty months old, which is about a year sooner than was generally thought. So I feel it’s quite possible I remember actually seeing the film rather than remember being told I was taken to see it. I have a sensory memory of black-and-white ballplayers going about their business on a big screen, of the faces of the idols looming. I think I remember my first movie.

    Still, let’s consider what it means to see one’s first movie, at any age. While some of the stories about crowds panicking when brothers Auguste Marie and Louis Jean Lumière first projected moving images onto a screen in a basement room beneath the Salon Indien du Grand Café in Paris in December 1895 are likely exaggerated, the paying customers were certainly perplexed and maybe unnerved by what they’d seen.

    (“Death will no longer be final,” the first movie critic wrote. He was right. We’ve gotten used to watching and listening to ghosts. Maybe half of the music I listen to in a given day was made by people who are no longer alive. )


    I imagine that I experienced Safe at Home! the way our subconscious might experience a dream—there was no sense of the unreality of it because I didn’t have the intellectual wherewithal to conceive of anything as unreal. Why would I assume the action wasn’t playing out in real time before my eyes; why would I assume these people were all pretending?

    I remember the vague feelings I came away with from my first movie far better than I remember the movie itself.

    In Search of Lost Time

    I can’t imagine many people have reason to care about Safe at Home! I’ve mentioned it in my newspaper columns a couple of times over the years and I’ve never had anyone send me a note to tell me they remember the film fondly (or at all). I’ve never read a lengthy review of it, and so far as I know no Baby Boomer memoirist has ever made any sort of Proustian fuss about the movie the way John Updike and Philip Roth used the John R. Tunis novel The Kid from Tomkinsville in their novels.

    (In 1985’s Roger’s Version, Updike used the novel to evoke a sense of nostalgia for a simpler, idealized past, representing a time when heroes like Tomkinsville’s protagonist, star pitcher turned outfielder Roy Tucker, embodied qualities like perseverance, innocence, and small-town virtue. Roth, maybe partially in reaction to his friend Updike, has his narrator, Nathan Zuckerman, reflect on the themes of unexpected adversity and the fragility of success explored in Tomkinsville, drawing parallels between Tucker and his own childhood sports idol, Seymour “Swede” Levov.)

    The simple reason the film exists is that some people thought it a viable business venture. (It’s difficult to know whether it in fact was, for, at the time, comprehensive box office reporting was less common, especially for films that were not major studio releases or significant box office hits.) Mantle and Maris each got a reported $25,000 for appearing in the film, which seems like big bucks given the era. In 1962, the Yankees were paying Mantle somewhere between $90,000 and $100,000, while Maris might have been making $75,000, so the movie money was not insignificant.

    (It’s interesting to note that Mantle and Maris, along with their teammate Yogi Berra, had filmed a scene at Yankee Stadium alongside Doris Day and Cary Grant in the middle of the historic 1961 season for That Touch of Mink, which was released in June 1962. Though they filmed the Touch of Mink scene before the Safe at Home! shoot, they seem much more relaxed as peripheral celebrities in the company of grown-up Hollywood stars than they do as moral instructors.)

    Mantle and Maris were—we now know—complicated men who were shaped by personal struggles and the pressures of their era, but in their time they were presented as uncomplicated avatars of baseball greatness. The general public saw them not as nuanced human beings but as symbols—Mantle as the charming, all-American hero with a prodigious natural talent, and Maris as the quiet, workmanlike athlete whose perseverance broke a nearly sacred record. This sanitized presentation wasn’t unique to them; it reflected a broader cultural tendency to simplify public figures into archetypes, especially for the consumption of children.

    For children, this dissociation between a person’s humanity and their role is particularly pronounced. A child’s worldview is naturally compartmentalized, built on straightforward categories that make it easier to navigate a confusing world. Teachers, for instance, are often imagined to live at school because that is the only context in which children see them. Similarly, TV actors seem to exist only inside the TV, and baseball players seem to inhabit the ballpark, their personal lives invisible and irrelevant to the child’s understanding of their role.

    This compartmentalization is reinforced by the way icons like Mantle and Maris were portrayed in media. In Safe at Home! and countless other cultural products of the time, athletes were depicted as idealized figures—larger-than-life embodiments of honesty, determination, and athletic excellence. The film’s message about integrity (even as it featured players whose own lives were far more complicated) wasn’t just a moral lesson; it was a reinforcement of their roles as heroes, not humans. To a child watching Safe at Home!, Mantle and Maris weren’t individuals with personal struggles, family dynamics, or inner contradictions—they were baseball.

    As we grow older, of course, we begin to reconcile the person with the role. We learn that Mantle battled injuries and addiction, that Maris struggled under the weight of public scrutiny and fame. But in childhood, that distinction doesn’t exist. The ballpark was their home, just as the classroom was the teacher’s, or the TV set the actor’s. This separation of identity and role, while naive, allows a child to fully invest in the fantasy of their heroes, unburdened by the complexities of real life. It’s a kind of innocence that we only fully understand once it’s gone.

    It’s tempting to imagine a novel in the Updikian-Roth tradition—a richly layered, fictionalized account of the relationship between Mantle and Maris during the 1962 season. Their partnership, forged in the crucible of the 1961 home run race, seems ripe for literary exploration. Were they trying to be more than ballplayers, or was the simplicity of that role both a refuge and a prison? Such a novel might include a chapter on the filming of Safe at Home!, capturing the strange dissonance of two men stepping outside their well-worn identities as athletes to play at being moral guides in a movie that treated them as symbols rather than people.

    One can imagine the tension: Maris, ever the reticent and reserved one, perhaps wondering if this latest detour—like the endless press conferences and commercial appearances—was another small betrayal of his quieter aspirations. Mantle, more gregarious but no less burdened, might have viewed the experience with a mix of resignation and amusement, aware that he was playing a role on and off the screen. Both men, standing under the klieg lights of a makeshift spring training set, might have questioned what it meant to step outside the ballpark and into the realm of Hollywood. Did they see it as a chance to transcend the confines of their sport, or did they feel like reluctant participants in the ever-expanding spectacle of celebrity culture?

    In this imagined narrative, the making of Safe at Home! would serve as a microcosm of the broader dilemmas they faced: the tension between their public personas and private selves, the strain of living as avatars of a game that demanded not just physical greatness but a kind of moral simplicity. And yet, like all artifice, the film’s staged lessons about integrity would stand in stark contrast to the messiness of real life—their lives. Perhaps that’s the real drama: not what the film sought to teach, but what its stars revealed, knowingly or not, about the uneasy intersection of myth and reality.

    That’s the nature of art: It exists independently of those who collaborate to make it. Safe at Home! doesn’t matter much to the culture at large, to any overarching story we want to tell about our society or times. Yet there are probably thousands of kids like me who saw it at an impressionable time and on whom it had a significant impact. Everyone has a first movie, an arrangement of light and sound that ambushes us — a first time that can never come again.

    Cinematic Literacy

    Reflecting on Safe at Home!, I’m struck by how much the experience of seeing it—as opposed to the film itself—shaped my sense of what movies could be. It was, in hindsight, an unremarkable film. Yet, for a child who saw the world as wondrous and immediate, it felt monumental. I suspect that’s the magic of early cinematic experiences: they teach us how to feel before we learn how to analyze.

    We learned things from movies. Maybe not moral lessons, but how to smoke a cigarette and how to lean against a car. How to dress, how to court affection, how to be stoic—strong and silent—and how to imagine ourselves the hero of our own narratives. We were to a degree warped by movies; they had a certain gravitational pull, a certain profound allure. For better or worse there was something special about the movies.

    I’m not sure that holds today. Cinematic literacy—the ability to read and appreciate films as art, storytelling, and cultural reflection—has diminished over the years. The rise of streaming platforms, algorithm-driven content, and fragmented attention spans has transformed how we watch, and perhaps more importantly, how we remember. While my generation stumbled wide-eyed into movie theaters, confronted by looming faces and larger-than-life stories, today’s first cinematic experiences are often mediated through smaller screens, broken into digestible fragments, or blended into the digital noise of other distractions. The kids are platform neutral—for a movie, any screen will do.

    Maybe Norma Desmond was right — the pictures have gotten small.

    Losing the Thread

    A few years ago I read an essay by Stephen Whitty that appeared online at njarts.com, one that bore the unwieldy but search engine-optimized headline “As opportunities to see old movies fade, so does basic cinematic literacy.”

    Whitty was the chief film critic at the Newark Star-Ledger for more than twenty years and is a contemporary of mine. He began writing professionally about film in 1987 (I wrote my first film review in 1986) and reports that his first movie was Disney’s Pinocchio, which he remembers seeing when he was three years old. (Pinocchio was first released in 1940 but was re-released in 1962, which is probably when Whitty saw it.) He’s one of the critics I regularly read, and I’m grateful to social media platforms for making that possible.

    In this particular essay, Whitty laments a recent poll he’d seen for “the fifty best romantic comedies in movie history.” It was conducted “by a popular website” that Whitty didn’t want to call any further attention to because, of the fifty films listed in the poll, “forty-nine of them had been released since 1980.” The other film listed was 1971’s Harold and Maude.

    “Apparently, their idea of ‘movie history’ doesn’t stretch back quite as far as mine,” Whitty writes. “[Commenters asked] where was The Apartment? It Happened One Night? His Girl Friday? Annie Hall? One of the list’s compilers responded with an online smirk, sarcastically thanking people for being upset. After all, that merely meant more clicks and ad revenue for the site so, you know, the joke was on us.”

    (How beauteous mankind is! O brave new world, that has such people in’t! … )

    In the piece, Whitty writes about how the collective ability to “read” a film, to engage with its language of images, gestures, and subtext, has eroded. It’s not just that we’re watching on smaller screens, but that the act of watching itself has become smaller, reduced to snippets consumed between other distractions. We’ve moved from immersive, communal experiences to fragmented, solitary ones, and in the process, the movies have lost some of their magic as cultural wayfinders.

    Once, movies were more than entertainment; they were instructional, aspirational, even mythological. They shaped our identities in ways we weren’t always conscious of. Whitty suggests that the decline of cinematic literacy parallels the diminishing influence of film as a cultural cornerstone. The shared language of cinema—how to interpret a long, meaningful glance, how to decipher a scene’s blocking, or how to understand a cut as more than a convenience—has been diluted. Instead, algorithms feed us what we already like, delivering a passive, surface-level engagement. The artistry of the medium, its ability to inspire a kind of active, participatory watching, now struggles to reach an audience trained to consume rather than contemplate.

    In the age of platform-neutral kids and ubiquitous content, what are we losing? Not just the bigness of the screen but the bigness of the experience—the lingering resonance of a film that stays with you long after you leave the theater, and the shared cultural fluency that came from a world where most of us watched the same stories unfold in the same darkened rooms. Whitty’s essay raises an urgent question: If cinematic literacy is fading, can cinema as art survive?

    Whitty goes on to challenge the very idea of “old movies,” quoting the late director, film scholar and occasional actor Peter Bogdanovich.

    “There are no old movies,” Bogdanovich would say. “There are only movies you haven’t seen before.”

    I understand Whitty’s frustration. Even would-be movie critics have sometimes startled me with their dismissive attitude toward what they invariably call “old movies.” In 2019, a would-be contributor to our movie section pitched a piece on the new Netflix film The Highwaymen about the Texas Rangers who tracked and eventually killed notorious outlaws Bonnie and Clyde.

    When I asked him how the film compared to—and whether it in any way paid homage to—Arthur Penn’s 1967 classic movie, this very bright young writer blithely replied that he hadn’t seen Bonnie and Clyde but that he’d heard from friends “it wasn’t any good.”

    Now it is fine not to like Bonnie and Clyde, but we ought to understand there are any number of films where we might say cinema splits off into new directions—where the once unimaginable is imagined and becomes part of the grammar of film going forward. John Ford’s Stagecoach (1939) is one of these. Alain Resnais’s Hiroshima, Mon Amour and Robert Bresson’s Pickpocket, both released in 1959, were others.

    And Bonnie and Clyde is where realistic hard violence married to gleeful comedy enters the American cinematic lexicon, specifically in an early scene where Warren Beatty’s Clyde Barrow shoots a middle-aged bank manager in the face after the man has jumped on the running board of their getaway car.

    This is one of the earliest instances where American movie audiences are faced with the graphic consequences of violence. The camera doesn’t cut away from the victim; we see blood and what appear to be bits of brain and bone flecking the car window.

    You can draw a straight line from Bonnie and Clyde to the work of Quentin Tarantino. If you mean to write about the movies in any serious way, you need to know this stuff.

    If you’re going to be a responsible and alert consumer of culture, you need to understand how movies work and are different from books and music and photography and painting (though they in some ways encompass and recombine all these arts and disciplines).

    Cinematic literacy might sound like an arcane pursuit, but it’s really only responsible consumption. You want to know what you’re putting in your body; you want to know what you’re putting in your head.

    One of the great things about our digitized and time-shifting era is that we all have — or can easily obtain — access to a massive library of movies that are new to us. I’ve written before about our COVID-19-inspired project of watching (or rewatching) movies from past decades like The Friends of Eddie Coyle from 1973, or The Hit, a 1984 film by Stephen Frears that’s often overlooked.

    This isn’t like a compulsory Continuing Legal Education seminar; we’re doing this because it’s enjoyable, because part of what people want from the movies is transport, a removal from the quotidian.

    The impulse isn’t entirely nostalgic—I’d prefer to see a new old movie, something I’d missed or forgotten about, rather than revisit a movie that’s familiar.

    All of us have blind spots, and no one can possibly keep up with every movie (or album or book) ever released, but if your job is to write and think about movies, then it’s only responsible to try to keep up. No one has to care about anything in particular, but I’m with Whitty: If you mean to put out a list of the best romantic comedies in movie history, you ought to have at least an inkling of the intimidating vastness of that movie history.

    Whitty notices that this kind of ageism “seems to disproportionately apply to cinema.”

    “In other disciplines,” he writes, “works that have come before — whether it’s Beethoven’s Ninth Symphony, Dickens’s Great Expectations, Miles Davis’s Kind of Blue or Andrew Wyeth’s ‘Christina’s World’ — are seen as classics, as part of a continuum. They’re not simply written off as old, and true aficionados appreciate them on their own terms.”

    Anything that happened before we were born is as remote to us as ancient Rome. It’s not part of our reality.

    Whitty has noticed this, and thinks it applies especially to the movies.

    “I‘ve taught film students—many of whom want to make their own movies — who seem to think cinema started with Pulp Fiction,” he writes. He’s right. I’ve talked to wannabe film writers who have no interest in anything that came out before 1999’s The Blair Witch Project. (Which is, to be fair, a product of the last century.)

    Whitty has the idea that the problem comes not from a “lack of access to media, but from too much.”

    When Whitty was growing up in the ’60s and ’70s, he writes, his “TV options consisted of Channels 2, 4, 5, 7, 9, 11 and 13.” Seven choices — which was a lot, about the same as my family had when I was a child in Southern California, and quite a bit more than I had as a high school student in Louisiana.

    “But the thing was,” Whitty writes, “every one of them programmed movies, every day. Because there was no cable then—never mind videos or DVDs—most of these were older movies, from decades past. And they were simply part of the programming, seamlessly integrated with the new. You grew up just accepting them.”

    Whitty argues that while we have hundreds of options available today, classic films are relatively hard to find, at least compared to new horror films and comedies. 

    I take his point with some modification — when Whitty argues that it’s difficult to find old movies, I think what he’s really saying is that it’s difficult to serendipitously encounter old movies. If you are purposefully seeking them, there’s TCM and, for the snobs, The Criterion Channel (I’m a charter subscriber). But most people are probably not as mindful of their viewing as those of us who write about movies and television programs; some turn on the box and start scrolling.

    We saw what was put before us. We cleaned our plates and liked it. 

    Now the classic movies that are out there are lost in a sea of reality programming and algorithm-generated new suggestions. We’re not forced to watch older films because they are our only option—our options are seemingly endless. We can live in our own silos, never having to encounter anything we haven’t consciously considered.

    It’s like the hollowness I feel when browsing Apple Music or Spotify. Virtually all music is available, but this completeness comes at the cost of the serendipitous thrill of flipping through racks at a record store. If we know what we want we can get it. But we have fewer ways of discovering what we want by accident.

    What we lose is a cohesiveness of our culture. There are fewer points of common reference, fewer shared ideas in the common reservoir.

    Our Movies, Our Mythology

    “In other countries, many children still grow up on ancient folktales — Norse sagas, Greek myths, Arthurian romances,” Whitty argues. “But we’re a relatively young nation. We don’t have a wealth of stirring stories passed down from generation to generation. The few we used to have — the adventures of Paul Bunyan, say, or the tale of Johnny Appleseed — faded away long ago.

    “No, in America, the movies are our mythology, or used to be. They were a common cultural touchstone, and a way of explaining the land we lived in, and the people we met here. They provided cautionary tales, moral lessons, national symbols, cultural archetypes. They still can.”

    I’m not sure that the rest of the world isn’t more like America than Whitty is willing to concede here, but again his argument is sound. We don’t just learn history from history books — for more than a hundred years now, movies have instructed us on how to talk, flirt and carry ourselves.

    Thinking of Safe at Home! as my first movie, I realize it wasn’t just about baseball or childhood heroes. It was my introduction to the magical language of cinema—the way a screen could amplify a moment, a gesture, or a lesson, however simplistic. As Stephen Whitty suggests, movies used to be our mythology, shared stories that taught us how to live, dream, and even lie convincingly to our classmates. Safe at Home! was no Citizen Kane, but for a three-year-old, it might as well have been. It showed me how movies could blend fantasy and reality, shaping how I saw not only Mantle and Maris but also myself in the mirror of their larger-than-life personas.

    The movie’s simplicity—its black-and-white moralizing, its naïve hero worship—might feel dated in an era that prioritizes irony and spectacle. And yet, that simplicity also embodies what we risk losing as cinematic literacy fades: the ability to see movies not just as content but as cultural wayfinders, luminous ghosts who remind us of where we’ve been and where we still might go.

    We can recognize ourselves in these ghosts, we can identify with and relate to them.

    But we have to meet them first.

    November 22, 2024
    baseball, cinema, film, mantle, maris, movie, movies, writing

  • Taylor Swift (raw, unfinished mix)

    This is a raw mix of a new performance of an old song, one we did for the Something for the Pain album in 2017. It’s not really about Taylor Swift. This is just me, on everything.

    November 20, 2024

  • Toronto 2001: Before and After

    In 2004, my wife Karen and I hosted an “open screening” for the Hot Springs Documentary Film Festival, an event where local (and not so local; we had artists from several surrounding states show up) filmmakers could screen their in-progress or un-platformed short films and get feedback from their peers.

    A composer named Hans Stiritz showed up with an amazing five-minute movie he called Before. Here’s how Hans described the project: 

    “Late in 2003, I rediscovered some 8mm film that my German grandmother had shot during several trips traveling to visit my family in America in the late-60’s/early-70’s. Sifting through all the film, I noticed that she always included several shots of airports and airplanes for each visit. Flying on the big jets was still something special in that day.

    “Using this ‘archival film,’ I created [the film] to take a nostalgic look at one family’s vision of the golden age of the ‘Jet Set’ and the magic of flight, and to consider how world events can cast new light (or shadow) on cherished memories.”

    The film was poignant and charming, until the very end, when the camera pointed out the window of a 747 and the Twin Towers of Manhattan’s World Trade Center invaded the frame. I’m not sure I’ve ever been so devastated by a moment on film.

    To give this some context, on the morning of September 11, 2001, my wife Karen and I were at Toronto’s Pearson International Airport, expecting to fly home through Chicago to Little Rock. 

    We had spent the previous four days at the Toronto International Film Festival, watching movies and going to parties and living out the surreality that comes from watching five movies a day and shooting tequila with Jake Gyllenhaal at night. 

    Actually that was at the 2002 festival, at a party for Todd Haynes’s film Far from Heaven. I can’t quite remember exactly what we did on the evening of September 10, 2001; we probably made an early night of it knowing we were traveling in the morning. What we probably did—after an early screening of Jean-Pierre Jeunet’s Amélie—was have dinner in a Greek or Ethiopian restaurant and go to bed.

    We had just made it through security and were in line to buy a sandwich to eat on the plane when I looked up idly at a small TV monitor fastened about ten feet high on a column a few feet ahead of us. I didn’t react at all the first time I saw the jetliner crash into the World Trade Center. And for a long moment neither did anyone else.

    It didn’t register as real. I thought it might be a trailer for a movie — I was in that mode. I thought it looked like cheap special effects that had been somewhat camouflaged by the director’s decision to present it as home video. I heard a man curse. I shrugged. 

    We milled about, then we headed down the corridor to our gate and only when we got there did we realize our flight had been cancelled. Something felt tilted as we swam back against the current, back out through security the wrong way. Somehow I arrived back at the check-in counter, where a preoccupied Delta agent told us there was something wrong with our plane, that we’d have to be re-booked. All the while he was tapping away on a keyboard that wasn’t giving him anything back.

    “I don’t understand,” he said through an apologetic smile. “I’ve never seen this before. U.S. airspace has been closed.” At that moment his supervisor appeared and whispered in his ear. I caught some of it — flight numbers. Commercial airliners.

    Things were snapping into place. I told them what we’d seen on television, three, maybe five minutes earlier. We all realized at the same time the seriousness of what had happened. It’s the moment where our lives broke in half.

    •••

    Maybe it’s because it was the movie I saw in what I now think of as the “Before,” but I have unreasonably strong feelings about Amélie. It’s the only movie I ever remember getting into a real argument over as an adult.

    I can and sometimes do offer strong opinions about the movies, but the cinematic experience is so subjective and personal it has always seemed silly to me to get combative about what are essentially matters of taste. But not long after Amélie was released I found myself at a dinner party where an academic suggested not only that Ron Howard’s A Beautiful Mind was a better film, but that Amélie was sentimental fluff for simple-minded people.

    I nearly flipped over the dining room table.

    Not that I have any particular animus against A Beautiful Mind — in my review from December 2001 I called it “a tasteful quality Hollywood motion picture — the kind that could win any number of Academy Awards” and concluded that, though some might call it “adventurous … because it takes as its hero an intellectual rather than a soldier or a spy … it is entirely conventional, unwilling to delve too deeply into the connections between creativity and madness, between inspiration and folly, and instead gives us another story of love conquering all, of the brave benighted sucking it up and just ignoring the demons calling to him.” 

    I stand by that; I generally find Howard’s work pleasant, sturdy, and thoughtful. Yet dinner-party Herr Doktor thought it the best thing since bagged Chianti, which he had every right to do. He thought it highly moral because it portrayed a protagonist afflicted by mental illness. Fair enough. (He also allowed that he believed that individual genes have souls.)

    But his slagging off on Amélie as a kitschy opiate for the masses did not sit well with me. Like a lot of other people, I love the film. It is a deeply humane picture that aspires to healing sweetness. And I recognize that the circumstances of how and when I saw it have much to do with my emotional attachment to the movie. It was — and remains — a souvenir of the world before we were used to magnetometers and the reflexive fear that rises whenever more than a few of us are gathered in a public place.

    The irony of such a stupid argument coalescing around such a gentle-tempered work is not lost on me.

    Some people will remember Amélie as the last film distributed by Miramax Zoë, the French division of the notorious Harvey Weinstein’s production and distribution company. This association with Weinstein may be why Amélie, though nominated for five Oscars (sound, cinematography, art direction, original screenplay and foreign language film), was shut out at the 2002 Academy Awards. Even as far back as 2002, people were pretty sick of Weinstein. (Even I — a film critic in a tertiary market — was hearing rumors of his boorishness and bad behavior back then. I never heard about anything criminal but I wasn’t surprised the Weinstein story turned out the way it did.)

    Jeunet had battles with Weinstein on earlier films, and there’s anecdotal evidence that Weinstein wanted to re-cut Jeunet’s film. It was only after it started winning overseas awards — four César Awards, three European Film Awards, two BAFTAs — that Weinstein began campaigning hard for the film.

    Jeunet wrote in an online column that “the Academy, tired of Weinstein’s vote-collecting ‘abuse,’ decided to boycott his films.” Jeunet believes Amélie was collateral damage in this boycott. 

    “Whoopi Goldberg, president of the (Oscars) ceremony, spent the entire ceremony making fun of Weinstein,” he wrote. “The result being, out of 19 nominations, he won only one Oscar” in 2002.

    Amélie was originally distributed by UGC Fox Distribution, a French-American film production company formed in 1995 by UGC — a company operating movie theaters in France and Belgium that until 1988 was known as Union Générale Cinématographique — and 20th Century Fox (now known as 20th Century Studios) to produce and distribute films across France. (UGC was absorbed into the French division of Fox in 2005.)

    U.S. distribution rights to the film were sold to Miramax Zoë. Lionsgate had a deal to distribute Miramax films on DVD. But we all know what happened to Weinstein and Miramax, and Sony Pictures Classics — an autonomous division within Sony Pictures — now owns the U.S. distribution rights to Amélie. SPC exercised those rights on Valentine’s Day, 2024, re-releasing the film in two hundred and fifty theaters across the country.


    Considering how much of an impact Amélie made on me, I was surprised when my research revealed I didn’t review the film for the Arkansas Democrat-Gazette.

    But I sat beside the critic who did. 

    •••

    Amélie — the original French title was Le Fabuleux Destin d’Amélie Poulain (The Fabulous Destiny of Amélie Poulain) — is a whimsical and quirky tale of the guileless title character (Audrey Tautou), a painfully shy gamine in a Louise Brooks bob who comes to believe her life’s work is delivering uncomplicated happiness to the people around her through innocent Rube Goldberg-ish pranks. Jeunet can fairly be described as a sort of Gallic Terry Gilliam, as he often seems concerned with overly intricate mechanisms and the amplified unintended consequences of seemingly trivial occurrences.

    But Amélie lacks the kernel of bitterness that marked the director’s earlier films (especially those made in collaboration with the animator Marc Caro), like 1991’s Delicatessen and 1996’s The City of Lost Children. Those are fantastic dystopian visions populated with well-intended freaks, capitalistic scoundrels and imperiled innocents. They read more like disturbing dreams than black comedies.

    In contrast, Amélie is a romantic confection, as much a valentine to the Paris-locked village of Montmartre as it is to Tautou’s uncomplicated loveliness, which echoes that of another Audrey, name of Hepburn.

    A second delight can be found in the way that Jeunet employs computer animation, which in 2001 still had an air of edgy dubiousness about it. Yet here, when Amélie turns her wide brown eyes up to a brilliant blue French sky and sees bunny-shaped clouds, the effect is digital enchantment.

    While the film is set in Paris in 1997 — around the time of Princess Diana’s death — Jeunet conjures up a storybook city with all traces of modern banality digitally removed. This is a hyper-saturated Paris with the Pompidou Centre and glass towers of the Bibliothèque Nationale elided (just as the Twin Towers would be elided from some of the New York stories that screened at Toronto that year), a nostalgic CGI-derived Paris that feels timeless and dream-like, stylized in the way of Woody Allen’s Manhattan, a pastel Paris far warmer and less sinister than the one portrayed in Baz Luhrmann’s lurid Moulin Rouge!, which was released in the U.S. a little more than a month after Amélie was released in France.

    It is one of those odd cinematic coincidences that Moulin Rouge! and Amélie in some ways seem to be in conversation with each other — both are highly stylized films set in Montmartre, with Moulin Rouge! (which was mainly filmed on soundstages at Fox Studios in Sydney, Australia) portraying the neighborhood as a seedy and lewd corner of the world and Amélie proffering it as benign and neighborly; the cafe Amélie works in is a real place called Café des 2 Moulins — “the two windmills.” Both directors playfully indulge certain Parisian stereotypes. Neither is terribly subtle.

    The very funny first act of the film provides us with backstory. Amélie’s childhood and parents are sketched, largely through a comic voiceover (in French) by a rumbling André Dussollier, who explains the particular tastes of Amélie’s neurotic mother and emotionally cool father, and how that effectively discouraged their daughter from making friends or becoming anything more than a waitress in a small cafe. All that is prologue to the day in her unremarkable apartment when she stumbles across a tin box hidden behind the tiles in her bathroom, belonging to a previous tenant, who was then a lonely little boy.

    Amélie contrives a covert way to reunite the owner — now a lonely grandfather — with his possessions, and his shock and gratitude convince her to make this sort of secret do-gooding her life’s work. These playful, childlike missions are interrupted when she meets and falls in very grown-up love with Nino Quincampoix (Mathieu Kassovitz, director of La Haine), a part-time cashier at a Pigalle porn shop whose shyness and fragility mirror Amélie’s. Nino collects discarded photo-booth strips — he roots them out from beneath the booth with a ruler — and preserves them in an album.

    Amélie has been watching him from afar when he accidentally drops the album at a railway station. She sets about returning the album and getting Nino to fall in love with her. 

    How could he not? 

    On the other hand, it’s not hard to resist Amélie if you set out to do so. The movie’s Paris is not real-life Paris; it has been largely scrubbed of crime and immigrants, and more than one dinner party academic has found a way to pronounce it “racist” over the years.

    This despite one of the featured actors, Jamel Debbouze, being of Moroccan descent. And the myriad ethnicities mounted in Nino’s photo album.

    It’s a fairy tale; some think it should be more representative of the multicultural reality of Paris. I don’t suppose that’s an illegitimate idea, but my philosophy has always been to grant movies license enough to work whatever magic they might possess. If I’m not put off by a film’s politics — and yes, every film has politics — I don’t probe for reasons to take offense. I realize this attitude can make me seem like a soft grader but so be it. I’m one of those movie critics who likes movies and who can find watching even a dull or insipid movie genuinely interesting, even if I’m only trying to suss out what went wrong.

    (Were it up to me, I’d never distill a movie into a numerical score, a thumbs-up or down, a ripe or rotten tomato, or a trio of dancing popcorn boxes. I’ve used grading scales only at the insistence of editors, not because I find them meaningful or useful. When tasked with creating a scoring system for films, I intentionally made it complex, drawing inspiration in part from the nuanced approach employed by wine critic Robert Parker to evaluate wines.)

    In her review, Karen concluded that Amélie “succeeds because, despite its setting in one of the world’s most sophisticated cities, despite adult situations, despite romantic entanglements, and despite its unique interweaving of computer animation around a real world, it still retains a naive sense of wonder. Amélie’s picturesque, peculiar universe is not like ours, and that makes it a desirable destination, at least for a couple of hours.”

    A couple of uncomplicated hours of escape. Which is what a lot of people are looking for when they go to the movies, Herr Doktor.

    •••

    Back in the first few moments of the After, none of us knew what to do, so adrenaline took over.

    Karen went to the pay phone—neither of us had mobile devices yet, and if we’d had them we’d have left them at home to avoid international roaming charges (friends of ours had reported racking up exorbitant bills thanks to international roaming; our newspaper had provided us with a calling card that allowed us to charge long distance to the company after punching in a comically long series of numbers). She called our hotel, to try to get our room back.

    The shaken Delta agent provisionally booked us on a next-day flight. I went to the Thomas Cook counter and got some fresh Canadian cash.

    As we walked out of the airport, a woman stood in the middle of the ticketing hall crying softly into her cell phone.

    We got in a limo and rode back to Toronto, as our driver—a tidy middle-aged Sikh in a blue blazer and crisp white shirt—told us “they” had hit the World Trade Center in New York, the Sears Tower in Chicago, and that a car bomb had gone off in front of the State Department in Washington.

    I held Karen’s hand and rather self-dramatically told her it would never be the same again, that we were in a different kind of war.

    We decided we needed to somehow get to New York, thinking that maybe we could rent a car; it was about an eight-hour drive away. We got back to the hotel, up to our room—a different one than the one we’d had, but a sanctuary nevertheless—and called our office. We were closer to Manhattan than any other newspaper employees, we knew the city, we should go to New York and report what we could report. Already there were some reporters on the road.

    But then we learned the border had been sealed and a security perimeter had been set up for fifty miles around New York City. No rental cars were available anyway. The Hollywood Elite had snapped them up. Some of them had rented buses. But nobody was getting across the border, probably not for a few days.

    So we went back to the festival, to sit in the dark for a while. 

    We went to a screening of Piñero, a film by Leon Ichaso about Miguel Piñero, the Puerto Rican poet and playwright who rose to prominence as a co-founder of the Nuyorican Poets Cafe in the mid-1970s. About fifteen minutes into the film, which is set mainly on New York’s Lower East Side, not too far from where the towers stood, the lights came up and a festival volunteer announced that the screening, along with all other screenings planned for that day, was canceled, as were all press conferences and social events.

    So we wandered over to the press lounge the Toronto International Film Festival had set up in the Park Hyatt. For a while we sat on the floor with a hundred other journalists from all over the world, watching what the CBC warned was raw, unedited footage. They showed a thrashing figure leaping—or falling—from one of the towers. They showed people cheering in the West Bank. Someone cursed, someone giggled nervously, heads turned. A photographer from the Toronto Star took our picture.

    We left, to walk the streets. Bomb threats were being phoned in to government offices in Ottawa. There was a police and fire cordon around a block of Bay Street in front of the Royal Ontario Museum. Karen asked what it was about and the fireman told her “public safety.” We ate our airport sandwiches in a park filled with students and mild sunlight. There were hand-lettered signs in the shop windows: “Our Canadian hearts are with you.” 

    There was a little girl on the television saying it was sad because Canada and America “were the same.” Stores were closed, tall buildings emptied. Everywhere were tender looks and soft words; patience spread like a balm.

    I remember the festival — it was a good one. Along with Amélie, we saw Richard Linklater’s Waking Life and David Lynch’s Mulholland Drive. I remember talking to Arliss Howard and Debra Winger about our mutual friend, the Mississippi writer Larry Brown, whose short stories had formed the basis of their movie Big Bad Love. I remember Joaquin Phoenix in Buffalo Soldiers, an enjoyably caustic story about military inequity.

    You probably have never heard of Buffalo Soldiers. It took two more years for it to make it into theaters; Miramax, which had acquired distribution rights to the film on Sept. 10, didn’t think the public would be receptive to a movie that portrayed American troops as cynical schemers and thieves.

    It wasn’t the only festival movie affected by the attacks. Big Trouble, a madcap Barry Sonnenfeld comedy based on a Dave Barry novel about a plot to smuggle a nuclear device onto a plane, was originally scheduled to hit theaters immediately after the festival; the release date was pushed back to April of the next year. When it was eventually released, Big Trouble struggled at the box office, earning just $8.5 million against a $40 million budget. While its reception may not have been stellar regardless, we can surmise that the delay and content sensitivity contributed to its underperformance. In April 2002 people were simply not ready to consume movies about bombs on airplanes.

    Similarly, the New York-set Serendipity — honestly a rather horrible romantic comedy — was delayed a week or two so that shots of the Twin Towers could be edited out. Even Ed Burns’ modestly charming Sidewalks of New York was pushed back to allow a little judicious trimming.

    •••

    Most of the restaurants in Toronto were closed that night, but we found a small Sherpa restaurant off Bay Street, where we drank wine and ate momos and talked about our dread.

    When the airports remained closed on September 12, we decided to try to make our way by bus to Cleveland, where Karen’s father would put us up for the night. The border was open again, the customs agent assured us. We re-re-booked our plane tickets to take us out of Cleveland Hopkins, figuring it might be easier to get home on a domestic flight. We thought we’d have more options in Cleveland, or at least a place to sleep for free. Maybe we really just wanted to keep moving.

    The next morning we made it over the border on a Greyhound bus. There were fourteen of us, and we were told we were the first to cross the border after it was re-opened. I talked briefly to one of our fellow travelers; he had flown to Toronto from Boston, and his plane had left Logan International at approximately the same time as the hijacked planes. At U.S. Customs, they didn’t even look in our bags, they waved us through. It couldn’t have taken ten minutes for us to clear.

    Waiting to change buses in Buffalo, we were jostled—rudely, I thought—by a man named Mohammed. (I looked at his luggage tag.) I cut hard eyes at him but he turned out to be a simple businessman, on his way to Cincinnati, then on to Birmingham, Alabama.

    We settled in, across the aisle from a young woman with two small, beautiful children and a couple of rows ahead of an old couple dressed in Navy uniforms, both of them wearing dark glasses—the wraparound kind people wear after they’ve had something medical done to their eyes. Our bus was quiet and its passengers sedate and we hummed along the south edge of Lake Erie, through the farmlands and the shooting woods of the American Midwest, past prisons that look like high schools and high schools that feel like prisons.

    I wrote a column on that bus, scribbling in longhand on foolscap, which ended with these lines:

    You have caught up with me now, I am writing this on [the] bus straining to make sense of what we’re going through and how the world has changed. It does me no good to tell you the details I think I know, by the time you read this some of what I think I know will have turned out to be misinformation, the rest will be old news.

    You have to trust me on this, because I know it sounds like a cheap trick, the kind of detail a hack might make up, but the opposite window has just filled with the chrome wall of a semi-trailer, on which is decaled the words “American Pride.” There is an eagle and a stars-and-stripes motif and the proprietor’s—the driver’s?—name and pertinent info stenciled across the back doors: “Geo. H. Golding Inc./Lockport, N.Y./Crystal River, Fla./Committed to Personal Dependable Service.”

    I have never paid much attention to displays like these, I’ve never felt much one way or the other about them. But I can tell you that right now, this minute, I needed to see Mr. Golding’s truck, I needed the irony-free symbolism and the naive corny faith of it, the Rockwellian amour-propre of the American working class.

    I am on a bus to Cleveland, my wife is dozing in the seat beside me and I am scribbling in my little book and I know we will be all right. I know things have changed, but it’s not all for the worse. We are together on the bus, we meet each other’s gaze, we speak, we help each other with our bags.

    We can get used to riding buses, it’s not so bad to hug the ground and see the places we routinely fly over. We all have our fly-over territories, places we hardly ever visit. Some of those are out there.

    I can feel a hawkishness rising in me, and I sense most of the people on this bus feel it too. I don’t know if it’s the worst part of me or not, but it’s probably a necessary part. The enemy has convinced kids they ought to want to die for his dubious cause, a cause which seems to consist of nothing more or less than a hatred of democracy and the abundant liberties of the modern world.

    We can see this as an opportunity to make a better world. We are right and they are wrong. It’s not a relative question. Bad people have blown a hole in the world and we must match their resolve if not their cruelty. And so we will.

    We made it back — we took a bus to Cleveland, rented a car and drove straight through to Little Rock under empty skies. And got on with the second part of our lives.

    In the After. 

    November 18, 2024
    amelie, film, movies, news, streaming

  • Photograph of Jesus

    November 18, 2024

  • Pete Rose

    November 18, 2024

  • The “Bang, Bang, You’re Dead” Stuff

    In 1993, the “Murder Is Not Entertainment” movement was initiated by the National Organization of Parents Of Murdered Children, Inc. There was a brief flurry of activity by members of the group in my hometown of Little Rock, Arkansas, in 1994. If I remember correctly, local theaters showing violent movies were picketed, and flyers were handed out in front of a local playhouse. I have not heard much about the movement since, though it still exists thirty years later.

    While the phrase “murder is not entertainment” might seem somewhat Pollyanna-ish given the cultural history of mankind, the movement’s main concern has always been the commodification of actual tragedies, where real murders are turned into consumable entertainment, often without regard for the victims and their families. True crime podcasts, documentaries, and sensationalized films based on actual crimes would seem to be their primary concern.

    But MINE has also voiced its concerns about fictional portrayals of murder, primarily when they are marketed to children and/or glamorize violence in ways that could desensitize viewers to real suffering. It advocates for responsible portrayals of violence in all forms of media, emphasizing that murder—even in fictional settings—should not be trivialized, glamorized, or treated as merely a plot device.

    I don’t exactly disagree with MINE’s position. For several years, murder was my business; I was a cop reporter for a newspaper that emphasized crime stories in a city that averaged more than a murder a day in the early 1980s. I have been in rooms with the freshly killed. I have talked to murderers, I have seen things that I wish I had not. 

    I have told violent stories.

    Sometimes, I’ve relished them. As a reporter, I wanted my stories to be compelling and to resonate with readers. I wanted people to read my accounts for their style and pacing and how they limned the human condition and spirit. I wanted to write stories about murder with inherent drama that people couldn’t look away from, but that told us about something more than simple crime. But in doing so, I realized that I was part of a larger, almost unbreakable fascination that Americans seem to have with violence.

    •••

    About fifteen years ago, a young woman I knew slightly was murdered. She was beaten to death during a home invasion. She was a news anchor for a Little Rock television station and was well-known in the community. She was a local celebrity but also one of those vibrant and charismatic people seemingly marked for bigger things than local news. While my wife and I did not know her well, we shared many friends. 

    By coincidence, the night before her attack, we had attended a charity event she was hosting at the Arkansas Governor’s Mansion. We walked our terriers, dressed up for the occasion, down a red carpet runway, and the young woman described their couture.

    The next evening, she was beaten unconscious. Five days later, she was dead. 

    Police detectives theorized that her assailant—who was later convicted and sentenced to life in prison—entered her house through her dog door and probably did not realize she was home at the time.

    We were shaken by the news. For me, the sensation was a strange commingling of shock and guilt and an uneasy and uncomfortable feeling of irrational culpability. It wasn’t so much that I was close to the young woman as I was close to people who were close to her—within a week of her death, two people told me they considered her their best friend. 

    A few weeks after her death, I had a conversation with a young man who had been very close to the young woman. Though their relationship was platonic, it was very deep, and her death had hollowed him out. He was drawn and tearful when we talked. He said her death had led him to question his love of horror movies, especially torture porn films like James Wan’s Saw (2004) and Eli Roth’s Hostel (2005). These films, he noted, skewed more realistic than the classic Giallo films like Suspiria (1977) or The Bird with the Crystal Plumage (1970) and the supernatural monster movies he’d grown up watching. He wondered if he hadn’t desensitized himself to brutality and if realistic depictions of violence weren’t inherently immoral.

    I wasn’t satisfied with what I told him, which was that the fact that he was asking these questions seemed to me a sign of character, and that after going through something as tragic and real as what had happened, it made sense for us all to take stock of the images and ideas we unthinkingly consume in pursuit of entertainment and escape. I said I thought it was good that he was questioning his own complicity in what we might all agree was a world gone mad. I said we all should undertake this kind of moral inventory from time to time.

    If I wanted to rationalize his—and my own— fascination with depictions of violence, I would say there is a fundamental difference between what happens in the movies and what happens in real life, just as there is a difference between what we imagine and what we bring into the actual world. Horror can be a controlled way to experience intense emotions—fear, suspense, even shock—in a safe space. Watching horror movies doesn’t necessarily signal approval of violence, though I would imagine that violent people are less dismayed than most by violent imagery. 

    Still, my instinct—my guess— is that horror can serve as an outlet for emotions or curiosity, almost like a release valve. It could help us cope with tough stuff. 

    A school of thought holds that the brain struggles to distinguish between lived experience and vividly imagined or dreamed experiences. While I don’t believe it is that simple, it is a compelling notion, especially given how horror films affect us. Our brains and bodies often respond as if the onscreen threats are real: our heart rate spikes, stress hormones increase, and we might even jump or feel tense.

    But while the brain engages deeply with the fear, it also continuously monitors context clues to confirm the threats aren’t real. We keep telling ourselves, “It’s only a movie,” and this awareness acts as a buffer. Some studies suggest that even as fear activates, the “contextual control” systems in our brain—assessing whether a situation is safe or dangerous—reassure us. So watching a horror movie—even watching torture porn—is not the same thing as witnessing a brutal crime.

    I don’t know about watching so-called “snuff films,” or the Faces of Death series of shockumentaries that had a vogue in the late 1970s and ’80s. Even though snuff films were an urban legend and a significant portion of the “deaths” depicted in the FOD series were staged with actors, special effects, and makeup, if the audience’s expectations were that they were watching the real thing, that would seem to change the dynamic. As a crime reporter, I did cover a case where a developmentally disabled high school kid murdered a female classmate after a party where one of the Faces of Death movies was shown. (This was not a detail I elided from my story.)

    But we shouldn’t be surprised if sadists gravitate toward violent films; their embrace of gore might be seen as more symptom than root of their pathology. Human beings are complex and few monsters fit the precise profiles we hold for them. Hitler was kind to animals; Jeffrey Dahmer was not.

    It’s entirely possible that  horror movies provide healthy people a way to experience the thrill of terrifying scenarios in a safe, controlled setting. Some people find this cathartic: confronting one’s fears in this way can allow us to feel stronger. Maybe horror can even help us process real-world anxieties in a fictional context, allowing us to explore our fear without consequences.

    Still, when the real world breaks in on us—when a friend is murdered in real life—onscreen violence is going to feel very different. When we understand the real-world consequences of violence, it’s more challenging to accept it as an entertainment trope. Murder is not entertainment—but perhaps fictional violence can serve as something else entirely. It can be a way to probe the jungles of the psyche and confront fears that otherwise haunt us in silence.  

    Still, after we encounter the raw reality of violence, that threshold between fiction and life grows thin, and we can no longer ignore the ways these images shape us. Garbage in, garbage out— we are what we consume.

    Violent films and television shows can offer release, a tool for reflection, or a window into the human condition—but when actual loss touches us, we are reminded that even the most vivid onscreen horror pales compared to the bad things we do to each other.

    •••

    In her 2003 book Regarding the Pain of Others, Susan Sontag argues that photographs of war and suffering often serve as a kind of crude spectacle, eliciting in the viewer a response similar to that of entertainment. Sontag examines the ethical tensions in viewing such images, acknowledging the human impulse to look at these painful scenes—to gawk—while questioning whether viewers are genuinely moved or changed by them.

    “Someone who is perennially surprised that depravity exists… has not reached moral or psychological adulthood,” she writes. Often, our fascination with violent images arises from a kind of moral immaturity—a desire to look at atrocity from a safe distance without genuinely confronting its reality. Sontag argues violent images can be a kind of pornography.

    Still, we’ve noted evolutionary biological reasons that witnessing or imagining violence might produce a physiological thrill, releasing pleasant doses of adrenaline and dopamine within us and creating excitement and engagement. This response was initially meant to prepare early humans for danger—a danger that has now been removed and rendered imaginary. The bullet that screams toward the viewer from the barrel of the gun held by the outlaw Justus D. Barnes at the end of The Great Train Robbery (1903) was not going to hurt anyone. However, there’s some truth to the myth that audience members were startled because they had never experienced such a lifelike depiction of moving images before.

    Film was an excellent medium for producing these exploitable dopamine moments. People feel a “safe thrill” when they see violence in a controlled setting, allowing them to experience intense emotions without any direct personal risk. The movies are, by and large, our safe space. 

    Or so it seemed until July 20, 2012, when a gunman dressed in tactical clothing and employing smoke canisters killed twelve and wounded seventy after opening fire during a midnight screening of The Dark Knight Rises in a Century 16 movie theater in Aurora, Colorado. (There were two other attacks in American movie theaters in 2015, with four more fatalities. As of this writing, there hasn’t been another.) Still, we can no longer assume that theaters are immune to the violence that permeates other parts of our society, that there is an inviolable boundary between the make-believe and the actual. The horror has come off the screen, leaving a lingering sense of vulnerability in spaces once considered sanctuaries of escapism. Like so many other public places, movie theaters have become reminders of our inability to separate real threats from fictional ones, challenging our belief that we can fully experience fear, thrill, or suspense in total safety.

    The allure of violence in American culture runs deep. Born of Puritans, we are more abashed by the horrors of loving flesh than by flying steel. We will shield our children from what we deem lurid and prurient, but we allow them to play with guns and watch thousands of characters die in awful ways on their screens.

    We invented the myth of the gunfighter. We’re a society that venerates outlaws and immortalizes antiheroes. From the myths of the Wild West to Hollywood’s silver screen, violence has become a staple of American entertainment—a paradoxical inheritance from a culture that claims to prize peace. But what is it about the violence that captivates us? Why do we, as a society, consume stories of bloodshed with such eagerness, as if they were cautionary tales and fantasies all at once?

    Our sacred freedom contains room for fascination with bad things. American culture revels in stories of heroes who fight, who transgress, who test the boundaries of morality. Perhaps it’s not violence we love, but rather the thrill of freedom pushed to its extreme—the notion that anyone, regardless of background, can become an outlaw hero.

    This fascination is rooted in a long historical and cultural lineage. From the earliest days of the frontier, America has been drawn to figures like Daniel Boone, who pushed westward and clashed with Indigenous nations. This pioneering spirit, mingling with an often romanticized notion of “Manifest Destiny,” produced a cultural archetype: the rugged individual who uses violence as both a tool and a defense, not necessarily for justice but for survival and, in some twisted way, for freedom. It’s an inheritance that led to the creation of distinctly American legends: the lone cowboy, the lawless gunfighter, and, eventually, the modern serial killer, who simultaneously repels and fascinates us.

    I’m not sniffing at this tendency to treat violence as entertainment; I recognize that some of the gentlest people I know are hardcore horror fans. I’m not a particular fan of the genre, but I am a fan of hard-boiled crime novelists like Jim Thompson and Raymond Chandler; I like most of Sam Peckinpah and Quentin Tarantino’s films. 

    I recognize it in myself, even if I do not understand it. I like the “bang, bang, you’re dead” stuff.

    •••

    I like some very violent movies very much; watching Arthur Penn’s Bonnie and Clyde (1967) was one of the formative adventures of my childhood.

    At least, I believe it was. 

    I cannot always reconcile my memories with how the world works—I cannot believe I could have seen Penn’s movie at a matinee as a ten-year-old, unaccompanied by any adults. I must have seen the film itself later, maybe I saw it for the first time on television. But there are a few things I feel sure of.

    Our parents dropped my nine-year-old sister and me off at a small theater in a small town, maybe in California or Georgia, on a Saturday afternoon. 

    Parked in front of the theater was an extended trailer that held what the exhibitors claimed was Bonnie and Clyde’s “death car,” a 1934 Ford Fordor V-8 with a desert sand paint job. For 50 cents, you could enter the trailer, walk around the bullet-riddled car, and look through its windows at Bonnie Parker’s notebooks, Clyde Barrow’s Thompson machine gun, and other paraphernalia. 

    The carny in charge made sure I noticed the gold death masks of Clyde Barrow and Bonnie Parker—bland, stiff, and anonymous. They could have been anybody.

    In real life, Clyde was a scrawny, screwed-up seventeen-year-old car thief who was sold to a cellmate for a carton of cigarettes during his first stint inside the penitentiary. 

    Bonnie married a safecracker named Roy Thornton at sixteen and tattooed his name and a pair of hearts on her right thigh before he went away to prison for ninety-nine years, for good. She then worked in cafes and police bars in Kansas City and Dallas before taking up with the man who was later to become Clyde Barrow’s lieutenant, Ray Hamilton. 

    There is some speculation that Hamilton was Clyde’s genuine love interest, that perhaps the only members of the Barrow gang who never shared a bed from time to time were Bonnie and Clyde. 

    Others claim that Clyde was “robustly heterosexual” and that Warren Beatty invented Clyde’s impotence to add complexity to his character. In the film, the first time Bonnie gets him alone, Clyde begs off, telling her he’s “not much of a lover boy.” “Your advertising is just dandy,” a frustrated Bonnie replies. “Folks’d never guess you don’t have a thing to sell.”

    (The character was initially imagined as bisexual, but Penn and Beatty thought this might lead audiences to attribute his murderous tendencies to sexual deviancy, a surprisingly progressive concern in 1967.)

    I don’t know the truth, and the facts are superfluous in the legend-building matrix, where images are supreme. Clyde’s weak, watery features and Bonnie’s birdy looks blur across the twin filters of time/nostalgia and grainy photographs.

    I’ve seen some old pictures where Bonnie seems pretty and others where she seems as lined, complex and stark as any Depression-era portrait. It doesn’t matter. Bonnie and Clyde drove dead into the fabric of American road myth; they are forever young, beautiful, and dangerous in our collective consciousness. 

    After the guns cooled, they hauled the bodies and the death car to Arcadia, Louisiana, stopping at a school to let the children peer into the car and rub their small hands across it, the ruptured sheet metal an object lesson. They laid Bonnie and Clyde out in front of Conger’s Furniture, and thousands of people came from two and three states away to view the riddled corpses. (The owner of the bullet-violated Ford, Ruth Warren, had to file a federal lawsuit before the Bienville Parish sheriff would release it to her.) 

    Inside, we watched some old newsreel footage, some of it captured “by an amateur photographer five minutes after” Texas Rangers had shot the car and its occupants to pieces on a stretch of road between Gibsland and Sailes. 

    (I’ve been to that site — marked by a graffitied and pocked tombstone-like monument — on Louisiana Highway 154 a dozen times. For a while in the 1990s, my other little sister, who wasn’t yet born when I think I saw  Bonnie and Clyde for the first time, raised quarter horses a few hundred yards away. It feels like nothing there, only a low spot in the piney woods.) 

    They staged a reconstruction of the ambush just “days after” the criminals were assassinated (there is no more perfect word) in 1934. You can find that film on YouTube today. I’m sure it’s the same footage we watched.

    I don’t know what real movie we might have watched that afternoon.

    •••

    Penn’s Bonnie and Clyde is where realistic brutal violence married to gleeful comedy enters the American cinematic lexicon, specifically in an early scene where Beatty as Barrow shoots a middle-aged bank manager in the face after he’s jumped on the running board of their getaway car. This was one of the earliest instances where American movie audiences were faced with the graphic consequences of violence. The camera doesn’t cut away from the victim’s face; we see the blood and what appear to be bits of brain and bone flecking the car window. 

    Clyde is visibly shaken by the episode; he protests that he didn’t want to do it and blames the getaway driver, C.W. Moss (Michael J. Pollard), for parking the car rather than idling outside the bank.

    It is the first sober punctuation in a film that, for the most part, plays as effervescent comedy and jazzy romance. It’s another kids-on-the-run story in the mode of Fritz Lang’s You Only Live Once (1937), Nicholas Ray’s They Live By Night (1948), and Joseph H. Lewis’s Gun Crazy (1950), all of which were inspired by the actual Bonnie and Clyde.

    Penn’s Bonnie and Clyde weren’t the first antiheroes to appear on screen; Lang’s sympathies obviously lay with the doomed Joan (Sylvia Sidney) and Eddie Taylor (Henry Fonda) — at the end of You Only Live Once he has them ushered into heaven by a vision of a kindly prison chaplain Eddie had accidentally killed during a jailbreak. Ray started They Live By Night with a tender moment between his romantic outlaws over which words are superimposed: This boy … and this girl … were never properly introduced to the world we live in…

    Yet, while contemporary audiences are likely to find these films corny today (I screened They Live By Night for a group of upscale retired folks who’d signed up for a summer series I curate, and I was surprised by their indifference to it; one gentleman confided that if he’d encountered the movie on T.C.M., he would “have changed the channel”), Bonnie and Clyde retains much of its power to shock and discomfit. It is a strange marriage of glee and horror, and the abrupt shifts of tone (which, at the time, I didn’t realize were imitative of the French New Wave) can still rock us.

    At the movie’s end, the ambush of Bonnie and Clyde is presented in a brilliantly edited, realistically gory fashion. The filmmakers take their liberties — in real life, Clyde Barrow never got out of his car as Warren Beatty does; he died at the wheel, with Bonnie slumping into him. But the scene’s improved by separating the pair; we see Clyde’s face as he realizes it’s a trap, and the sun glints off his preppy spectacles. (I remembered the ending slightly differently — somehow, I put a pistol in Clyde’s hand and had him returning the Texas Rangers’ fire as he rolled across the asphalt. But that scene isn’t in Penn’s movie, just mine.)

    We watch their bodies jerk in slow motion. We hear Faye Dunaway’s Bonnie shriek.

    “We were operating in a totally different social context in those days,” Penn told Terry Gross on her Fresh Air interview program in 1989. “It was in the midst of the Vietnamese war and the daily news, the news that we saw on television, had body counts — numbers of soldiers wounded and dead — and it was a time where, it seemed to me, where if we were going to depict violence we would be obliged … to depict it accurately, with the kind of terrible, frightening volume that one sees when one is genuinely confronted by violence. And that’s what we did in Bonnie and Clyde….”

    The ending, he says, was “an attempt to raise these two characters to a faintly mythic proportion … to propel them upwards into myth.” 

    Penn said he filmed the ending with four cameras ganged together running at different speeds: “The intention there was to get this kind of spastic motion of genuine violence, and at the same time, the attenuation of time that one experiences when you see something, like a terrible automobile accident…. this extraordinary stretch of time, while these events were taking place. ” 

    I didn’t think much at the time about how Arthur Penn achieved the “spastic, balletic violence” that marked the film’s ending — I’m still not very interested in the technical details of a given movie. I only remember receiving the ending as terrible and strange. It was beautiful in a way that I couldn’t quite express, though it had something to do with the shadows falling across my country at the time.

    I didn’t think about how Bonnie and Clyde were victims as well as murderers or about how they were particularly American types, our kind of Romeo and Juliet, living fast, dying hard, and leaving ruined and riddled corpses. 

    I didn’t connect Bonnie and Clyde with Vietnam at the time, though I bet Oliver Stone did. When he made Natural Born Killers, he made sure that the Man didn’t do in Mickey and Mallory. They got away, more or less, which is one reason N.B.K. unsettles so many of us, why so many people who might have felt a little wistful at the end of Bonnie and Clyde were angered by Stone’s audacity.

    Bonnie and Clyde might have been the first movie I loved for itself. Maybe it’s not technically true, but I still think of it as one of the first movies I saw without adult supervision. I wasn’t ready to see it, I probably shouldn’t have seen it as a pre-adolescent (and while I almost certainly didn’t see it at the matinee I remember seeing it at, it wasn’t long after that I saw it), but I did. When my father took me to see Butch Cassidy and the Sundance Kid shortly after it opened in 1969, Bonnie and Clyde was already part of my frame of reference.

    I know this because I distinctly remember (just as distinctly as I remember seeing Bonnie and Clyde at a matinee)  talking about it with him afterward. I told him I thought the endings of the two films were virtually the same — the glamorous and outnumbered outlaws died in a hail of authoritarian gunfire. But he argued that because George Roy Hill ended his movie with a freeze frame rather than jerking bodies, he had allowed for the possibility that Butch and Sundance had, in fact, escaped. 

    My father thought Hill’s ending was better; I thought — or at least I believe now — that the ambivalence of the ending is wishful but tonally appropriate. Butch and Sundance didn’t get away; like Bonnie and Clyde, they died in a hail of gunfire. But it was pretty to think they might have.

    What I didn’t know then is that Hill’s ending was a homage to The 400 Blows (1959), François Truffaut’s semi-autobiographical film that also ends on a freeze frame. Truffaut’s purpose was explicitly to renege on the tacit contract between the filmmaker and the audience.

    We expect a resolution in exchange for two hours of attending to the filmmaker’s narrative. But in The 400 Blows, Truffaut withholds this resolution, suggesting that there’s more to the story of young Antoine than we’ll get to see. Hill doesn’t really do this in Butch and Sundance, but he teases us with the possibility that the outlaws got away. (And for a possible answer, we can see Mateo Gil’s 2011 Blackthorn, which starred Sam Shepard as an aging Butch Cassidy living in hiding in Bolivia. I love how movies can converse with each other across decades.)

    Anyway, Bonnie and Clyde was a movie for its time — it was released during the Summer of Love but anticipated the curdling of the hippie. Vietnam was playing in American living rooms every evening. Bobby Kennedy and Martin Luther King Jr. had months to live. Charles Manson was creeping-crawling through the Southern California desert. L.B.J. would soon abdicate. It didn’t seem like the center could quite hold; outlaw nihilism felt like a reasonable option. 

    Bosley Crowther wrote a long and peevish review of  Penn’s film in the New York Times that called it “a cheap piece of bald-faced slapstick comedy that treats the hideous depredations of that sleazy, moronic pair as though they were as full of fun and frolic as the jazz-age cut-ups in Thoroughly Modern Millie…   [S]uch ridiculous, camp-tinctured travesties of the kind of people these desperadoes were and of the way people lived in the dusty Southwest back in those barren years might be passed off as candidly commercial movie comedy, nothing more if the film weren’t reddened with blotches of violence of the most grisly sort… This blending of farce with brutal killings is as pointless as it is lacking in taste since it makes no valid commentary upon the already travestied truth.”

    He concluded: “I’m sorry to say that Bonnie and Clyde does not impress me as a contribution to the thinking of our times or as wholesome entertainment.” 

    Some people say he was fired for that misjudgment, and maybe he was. And perhaps he should have been — Crowther, who was sixty-two years old then, wrote three more negative pieces about the movie and referred to it negatively in several reviews of other films before being replaced as the Times’ critic in early 1968.

    Meanwhile, in the New Yorker, Pauline Kael wrote that the violence in Bonnie and Clyde was “a kind of violence that says something to us; it is something that movies must be free to use.”

    Kael makes some very subtle points in her review; she didn’t like The Dirty Dozen (my father and I did) and wrote that the violence in that film “personally [the italics are hers] offended” her — she wouldn’t deny the filmmakers the right to use its graphic depiction as a tool. Yes, there is a danger in depictions of violence; people can be warped by what they experience and consume, but “[p]art of the power of art lies in showing us what we are incapable of. We see that killers are not a different breed but are us without the insight, understanding, or self-control that works of art strengthen. The tragedy of Macbeth is in the fall from nobility to horror; the comic tragedy of Bonnie and Clyde is that although you can’t fall from the bottom, you can reach the same horror.”

    In other words, we manufacture monsters out of people. There, but for the grace of art, go all of us. The unrefined soul is dangerous and liable to act out of fear and the tribal imperative. Movies are like travel in that they might cure us of prejudice and ignorance. I would not regulate what they can show us.

    There has always been a constituency for dark stories, and our particular American tradition is rife with murder ballads and bloodbaths. Shakespeare wasn’t dainty—there is a dark yen in the human animal, a drive for extinction that rivals the urge for sex. And it is from these base and desperate urges that art is made. We make things from bones, blood, the humors of the body, and invisible things that float in the air.

    Part of the power of art is also that it shows us we are not so different than our monsters. Goethe could not imagine a crime he could not commit; Kael says art might save us from nihilism. The schoolmarms and the White Citizens’ Council fear that impressionable minds will imitate the beautiful violence or the unleashed sexuality they see on the screen: Monkey see, monkey do.

    It’s naive to imagine that some people don’t directly copy what they see on screen. The movies teach us how to talk, dress, and flirt. If you’re the sort of person who is inclined to kill, then maybe the film will give you ideas on how to do that. I think that violence in the media we consume is a risk factor, but human beings have always been consumed by this sort of material. 

    •••

    The death of the “good” Sgt. Elias (Willem Dafoe) in Platoon feels like the end of Bonnie and Clyde. It’s shot in slow motion to the same apotheosizing ends. But Stone carries it even further; as Elias emerges from the jungle, chased by North Vietnamese Army regulars, we don’t hear screams, gunshots, or the whirring of helicopter blades, only the swelling of Samuel Barber’s “Adagio For Strings” on the soundtrack as Elias falls to his knees, receiving bullet after bullet.

    Yes, it’s beautiful, but it’s not a lie. Or at least it doesn’t feel like one. 

    That’s the problem, isn’t it? Some uses of violence are cheap and offensive, but we don’t proscribe them, because people like Arthur Penn and Stone can use violence in artful ways. And we can argue about who makes good use of it and who is merely exploitative. Kael didn’t like The Dirty Dozen; I can make a case for Robert Aldrich (who famously said he didn’t believe “violence in films breeds violence in life” but that “[v]iolence in life breeds violence in films”).

    Maybe we’re just inherently violent creatures, though when faced with it, a lot of us have trouble committing actual violence. Or at least we used to.

    •••

    In 1995, four years before the mass shootings at Columbine, Dave Grossman, then a U.S. Army lieutenant colonel, pioneered a psychological field he dubbed “killology,” the scholarly study of the destructive act. He told me he thought the country was in denial over the extraordinarily harmful nature of consuming violence as entertainment.

    Grossman, who was a professor of military science at Arkansas State University in Jonesboro (about 135 miles northeast of Little Rock) at the time, had just published his landmark book On Killing: The Psychological Cost of Learning to Kill in War and Society, a work that’s now considered a prime text — it’s required reading at the F.B.I. Academy in Quantico and is on the curriculum at West Point.

    Grossman, citing the work of Army historian Brigadier Gen. S.L.A. Marshall, alleged that, during the Second World War, “maybe twenty percent of the troops… fired their weapons.”

    This claim was controversial then, and much of Marshall’s work has since been discredited, but the point is that it was accepted as fact by the U.S. military. Combat reticence posed a problem that the military set about solving. By the time of the Vietnam conflict, the individual firing rate had risen to over ninety percent. Grossman says this was accomplished by “desensitizing and conditioning” soldiers to think of their enemy not as human beings but as “targets.” Soldiers were trained by shooting not at circular bull’s-eyes but at human figures that flopped over when hit. They were given a language of euphemism; they were not “killing” other human beings like themselves but “engaging an enemy target.”

    Much of the clinical, technical jargon of Vietnam was an intentional device to detach soldiers from the reality of what they were doing, to remove the emotional component of battle, to overcome the natural psychic resistance to killing, and to bust the taboo. As Grossman points out, this desensitization had devastating results for returning Vietnam veterans. They came home bearing burdens of guilt, only to find that a large part of society condemned their actions.

    “Now those same kinds of techniques that more than quadrupled the firing rate in Vietnam are at work in our society at large,” Grossman told me. “We are taking the same kind of individuals that the military found so malleable and subjecting them to the same kinds of desensitizing techniques.” 

    Grossman suggested that movies and television programs that showed countless people gunned down were working to dissolve the natural disinclination to kill. And he was even more disturbed by the verisimilitude available via immersive, interactive video games.

    “Video games are great things,” he said. “They allow us to learn all kinds of skills by mimicking and rehearsing, mimicking and rehearsing. Now we’ve got these games that are so real; you’re holding a weapon in your hand, and human forms pop up on the screen, and you’ve got a split-second to shoot them down. Bang. The gun rocks in your hand, your adrenalin is pumping, and the figure on the screen goes down, jerking, twitching, bleeding. 

    “And, on top of that, you’re scored on a point system. It is the exact model of operant conditioning.” And it also works the other way.

    Grossman referenced the scene in A Clockwork Orange where Dr. Brodsky attempts to instill an aversion in the young thug Alex (Malcolm McDowell) through the fictional Ludovico Technique. Brodsky subjects Alex to nausea-, paralysis- and fear-inducing drugs while his eyes are clamped open and he is fed images of graphic violence and sex. It’s a classic Pavlovian kind of conditioning — the idea is he’ll forever associate violence with the bad feelings he’s experiencing. (One unintended consequence of the therapy is that Alex cannot enjoy classical music; Beethoven’s Ninth Symphony served as the soundtrack for the conditioning reel.)

    “Now, what we’re doing with these violent television shows and movies is reversing the process,” Grossman said. “When someone watches a vividly, horribly violent scene, they usually sit in a relaxed, enjoyable environment with their favorite soft drink. And what do audiences do when the prime bad guy, the one everyone agrees deserves to have horrible stuff happen to him, finally gets it? They cheer.”

    More than a quarter of a century ago, Grossman believed the country was in denial over the extraordinarily harmful nature of consuming violence as entertainment. 

    “We are reaching that stage of desensitization at which the inflicting of pain and suffering has become a source of entertainment: vicarious pleasure rather than revulsion,” he wrote in On Killing.  “We are learning to kill, and we are learning to like it.”

    In the end, onscreen violence is just another mask we wear, a shadow-play that lets us glimpse our darker impulses without consequence. It’s a performance of the chaotic and tragic forces that flicker beneath our polished surfaces. T.S. Eliot once wrote, “Humankind cannot bear very much reality.” Perhaps that’s why we retreat into stories of blood and bullets, of outlaws and antiheroes—we’re trying to touch the real without being swallowed by it.

    Yet when the boundary blurs, as it so often does, we’re reminded that the things we see, even on screen, aren’t so different from the things we do. We are, as much as we might like to deny it, made of the same stuff as our monsters.

    November 14, 2024
    crime, film, movie-review, movies, reviews

  • Somebody Else’s Son

    November 11, 2024

  • Love über alles




    A friend of mine, embedded in the political game, asked me for a quote for his Election Day Substack.

    Maybe he wanted a prediction—if so, I thwarted him. (After all, I’m the guy who confidently picked Arkansas over Ole Miss on the radio. So much for gut instincts.) Instead, I told him there had never been an American presidential election where the choice seemed so clear to so many, where so many people trembled at the possibility of defeat. I said this election would test the core assumptions of the American experiment; that we were about to find out what kind of people we really are, stripped of pretenses, facing ourselves without illusions.

    But looking back, I’m not sure I was right about that last part. I think most of us, deep down, know exactly what kind of people we are.

    We’re ordinary people, neither particularly blessed nor cursed, but shaped by the unique, turbulent circumstances of our history. We’re not greater than our ancestors or any other people across the globe. Maybe we’ve just been luckier, blessed by geography, a relative lack of invasion, and a wealth of resources. But no luck holds forever. And luck, as much as we like to believe otherwise, is rarely a marker of virtue.

    Some believe differently. I know there’s a significant number of folks who would argue I’m wrong, who believe America is uniquely favored by a God who loves us best and grants us special dispensations. That we’re not merely lucky, but blessed in a way other nations aren’t, and that God’s interest in our fortunes validates all our desires, even giving us dominion over the Earth and an inherent right to prosper, as if by divine appointment.

    This sort of American exceptionalism combines pride with a conviction that our values and history are uniquely moral, even divinely sanctioned. That’s how we got Manifest Destiny, the 19th-century doctrine that we were destined to expand across North America. And Christian nationalism, too. Some would argue this narrative served a purpose in its time, but today it’s an inherited belief that’s lost sight of the costs it exacted. In our self-assurance, we risk forgetting how close we are to repeating the mistakes of the past.

    True grace, I’d argue, isn’t something we can acquire through effort or national pride, nor does it justify our whims. Rather, grace is a gift that can help us transcend our natural pettiness, selfishness, and short-sightedness. Grace, if it’s present at all, leads to an inner transformation that flows outward, compelling us to live lives of compassion, generosity, and integrity. When someone exhibits genuine goodness—selflessness, empathy, a quiet willingness to serve—it can be perceived as a sign of grace working within them.

    The saying “By their works you will know them” comes to mind. I believe it’s from Matthew in the Bible. It means that one’s actions reveal more than words or outward appearances ever could. It’s by our deeds, not our lofty ideals, that we show our true nature.

    And that’s a standard that applies to nations, too. Beautiful words enshrined in a constitution don’t make a country great; doing the ongoing, often difficult work of living up to those promises might. Talk is cheap—and perhaps that’s in the Bible, too.

    To truly understand America, to see our true nature, we must look past our narratives and ideals to the actual work we do. A nation that talks about equality and freedom but permits suffering and inequity does itself no justice; its words are hollow unless its deeds reflect them. It’s easy to drape ourselves in high ideals, but true greatness demands that we question how those ideals play out in reality.

    One perspective on American history is that our nation was founded on conquest and slavery, its soil soaked in the suffering of the peoples who were already here and those brought here in chains. We often tell ourselves a story of liberty, justice, and equality, but the reality is more complicated. From the outset, the pursuit of freedom for some came at the price of profound subjugation for others. This legacy of domination isn’t merely a distant historical chapter we can close and set aside; it’s woven into the fabric of who we are, embedded in the land, and reflected in our institutions.

    Recognizing this foundation doesn’t diminish the ideals America professes; in fact, it should deepen our commitment to them. If we acknowledge our beginnings in conquest and slavery, we’re better able to understand our responsibilities today. True greatness isn’t achieved by burying our origins in myth but by facing them with honesty. This is the task of a mature nation: to reckon with its own contradictions and work tirelessly to bridge the gap between its ideals and its actions. To embrace liberty and justice for all, we must start by confronting the costs we imposed to claim those principles for ourselves.

    Now, I don’t mean to be unduly harsh on us. We are what we are, “the paragon of animals…in apprehension so like a god.” (Shakespeare, not the Bible). We’re creatures full of the usual instinctive drives and appetites, tempered by hints of something greater that compel us to build cities, ideals, and belief systems. We have endless imagination and ambition, yet remain afraid of the dark. That darkness takes different forms over time, sometimes as fear of the foreign or the unknown, other times as a desire to hold onto what is comfortable even when it’s time for change.

    It’s a tension that has driven human history—the remarkable blend of aspiration and limitation that characterizes us.

    But there’s another element at play: we tell ourselves lies about ourselves to get through the day. We convince ourselves of convenient truths to maintain our comfort. We like to believe that most people are kind—and that’s probably true when kindness is convenient. But history has shown us what happens when cruelty is sanctioned, when it’s dressed up as “tough love” or an unpleasant necessity. Only a few people, in any time or place, will refuse to be cruel when it’s licensed, and some actively enjoy it.

    Consider the employee who revels in disciplining others, the official who enjoys wielding power just a little too much, the internet troll who finds pleasure in tearing others down. It’s easy to find examples, from our daily interactions to the pages of history, of people who indulge cruelty when it feels justified or permitted. And maybe, just maybe, we’re all capable of slipping into it.

    We also like to think that most people are honest and hardworking. But are they? Management types love to repeat that twenty percent of a company’s employees end up doing eighty percent of the work. Think about your own workplace: are most of the people there genuinely hardworking? Are they honest? Or are they merely kept in line by security cameras, fear of sanction, or the dread of humiliation? How much of our “integrity” is circumstantial rather than intrinsic?

    We want to believe that people will, in most situations, act rationally—that they’ll balance self-interest with the common good and that we can reason with them. But the truth is that society’s cohesion depends less on the assumption that people are rational and more on the hope that enough of us will choose decency when it matters most. That takes an act of faith.

    That faith is shaken but not altogether lost.

    Still, you should understand the Nazis thought they were good people too. Many of them believed they were on a righteous path, serving the greater good of their nation and safeguarding their culture. They saw themselves as protectors of values, heroes of their own stories, and defenders of something noble. They thought they were on the right side of history.

    But history’s harsh light reveals otherwise: their sense of self-righteousness served as a cover for atrocities, allowing them to justify what should never be justified. This uncomfortable truth—that people can commit great harm while seeing themselves as “good”—is a reminder of the dangers inherent in unquestioned belief and moral certainty.

    We, too, risk falling into this trap when we tell ourselves that our actions, however aggressive or exclusionary, are justified by some larger purpose or higher calling. We must remember that the mere conviction of righteousness does not make one truly good. True goodness requires a humility that questions itself, a compassion that transcends ideology, and a willingness to look beyond tribal loyalties. Without these qualities, any group, any nation, can slip into darkness—even with the best of intentions.

    In reality, most people aren’t sociopaths or narcissistic monsters, nor are they saints, martyrs, or sages. Most of us are just ordinary, muddling through life, making compromises, rationalizations, and balancing self-interest with occasional flashes of empathy and a desire to belong. We act rationally when it aligns with our interests or fears, and even then, our rationality is filtered through biases, emotions, and limited perspectives. In the end, we’re simply human—nothing more, nothing less.

    Believing in the basic goodness of ordinary people is reassuring. It allows us to live with less vigilance and to view ourselves as part of a benevolent social fabric. I believe most people are decent enough if you take the time to know them. That’s why the cure for prejudice is travel, and the key to understanding is curiosity and connection. When we look closely, the differences between “us” and “them” dissolve, and we begin to see ourselves in others.

    And yet, ordinary people tend to form tribes; that’s an instinct left over from our lizard-brain days. We want our tribe to be the biggest, best, and greatest because it’s cooler, smarter, and better-looking than the rest—because we’re in it. That’s the human condition, the default setting we have to resist if we’re ever to form the greatest tribe, which, paradoxically, would be the most inclusive tribe. A tribe others would look to with admiration, not because of its exclusivity but because of its expansiveness and generosity.

    A tribe that would be known, not for its power, but for its love. (By our love, by our love, they will know we are Americans by our love.)

    We are ordinary people, and there is dignity in that. It’s mostly ordinary people who accomplish extraordinary things, who understand that achievement takes hard work, faith, and resilience. That true success requires striving through disappointments, doubts, and setbacks. That we must fail and fail again until we fail a little better. The Cincinnatuses and Caesars, saints and sinners, who make up our history all come from the ranks of ordinary people.

    History isn’t a prophecy waiting to be fulfilled; it’s something we create together. Life is far from fair, but in the end, people tend to produce the kind of culture they truly want, one that reflects their values and aspirations. We are what we do, not what we say, and we are not made better by excusing our own faults while holding others to high standards.

    So when people say they want to “make America great again,” I wonder if they understand that greatness should be a challenge, not a taunt. It should be a call to reach higher, not a bumper sticker boast permitting us to look down on others. Making America great is no small task. It’s a collective effort that requires integrity, hard work, and an unflinching commitment to our highest ideals.

    If greatness is to mean anything, let it mean this: that we, ordinary people—flawed, human, and full of potential—choose to be known by our love and bound by our works. That we rise above the easy comfort of self-deception to face the challenge of the America that could be. Let us earn our place in history not by claiming it but by proving it, day after day, in the quiet, often unseen acts of decency and compassion that make greatness possible.

    November 10, 2024
    love, politics, Trumpism
