
Can you summarize yourself to smarts?


That seems to be the quiet bet underlying so much of our current relationship with AI. Last month, a friend told me, with a mix of pride and excitement, that he had stopped reading books and listening to podcasts. Instead, he takes the books he would have read and the conversations he would have listened to and runs them through ChatGPT, generating summaries, extracting core ideas, and cutting out the rest. What he gains, in his view, is efficiency and clarity: the ability to get the value of a piece of content in a fraction of the time it would have taken to engage it directly. Spend less time on more content, he argues, and you'll get smarter and more productive.


This instinct is not isolated. It is showing up in high school literature classrooms where students are increasingly turning to AI-generated summaries instead of reading Shakespeare or Twain, and it is present in the workplace where meeting recaps, call summaries, and condensed reading lists have become standard artifacts of daily work.


To be fair, we have always had versions of this. SparkNotes existed well before AI, and executive summaries have long been part of how we navigate complexity and abundance. Even the best teachers help students step back from a text to understand its structure and themes without requiring them to master every line.


But something important has shifted since the beginning of the AI boom. Even SparkNotes, for all its shortcuts, was still grounded in literacy. Its summaries were, generally speaking, accurate. But ask an LLM to summarize a specific chapter of a Jane Austen novel, and I'll bet you a bitcoin it makes a mistake. SparkNotes also published in sentences and carried a point of view, however simplified. It asked the reader to remain, at least in some small way, inside the act of reading. What we are seeing now is a move toward something more "frictionless," where the goal is not simply to clarify but to compress, not just to guide engagement but to eliminate the need for engagement altogether. The summary is no longer a companion to the work of understanding an idea. It is becoming a substitute for it. And I wonder: if we eliminate the effort that leads to understanding, what do we have left?



I find myself less concerned about what this does to Shakespeare, who has endured centuries of reinterpretation and reduction, and more concerned about what it does to us, especially in the ordinary and unfinished parts of our lives where understanding is actually formed.


Life does not unfold in cleanly extracted insights, bullets, emojis, or em dashes. It takes shape in the hesitations and repetitions that fill real conversations, in the half-formed and run-on sentences that circle an idea before finally arriving at something true, and in the long, meandering explanations that could have been shorter but would have lost their meaning if they were. Some remark that a meeting could have been an email. But maybe the monotony of the meeting was just the thing needed to spark clarity?


When we listen to a podcast guest think out loud, changing their mind as they speak and fumbling for the right words, we are not simply collecting information. We are learning how to inhabit complexity, how to hold uncertainty, and how to think in real time. Why was President Obama an effective orator? Rhetoric, to be sure. But I would also suggest that his comfort expressing "umms" and hanging on to long pauses helped him to refine and polish his core ideas, shaping and sharing his vision simultaneously.


Awkward, grammatically incorrect communication is central to everyday interactions. When a colleague explains something imperfectly, or when a friend sends a message that is grammatically uneven but emotionally clear, we are reminded that communication is not a finished product but a shared process of meaning-making. These moments carry a kind of formative weight that is difficult to measure but essential to how we learn. When we sand down the "errs and ums" in the name of efficiency, we are not just removing excess. We are removing the conversational conditions under which understanding deepens. This really ought to give pause to those who work in ministry or faith formation. Where else can faith take root but in the fragmentation and messiness of lived experience?


There is a deeper cost here that is easy to overlook because it does not announce itself as a loss. When we outsource the middle of an experience, the wandering, the wrestling, and the repetition, when we erase the grammatical gaffes, we begin to lose something more subtle than information. We lose formation. Reading a book is not only about empathizing with the author’s conclusion but about undergoing the argument itself, feeling its tension, sitting with its ambiguity, and allowing it to work on us over time. Listening to a full conversation trains a kind of patience and attention that cannot be replicated by a summary. It teaches us to remain with ideas that do not resolve quickly and to resist the impulse to move immediately to closure.


Summaries, by design, collapse time. They promise arrival without journey and clarity without the discomfort of complexity. Over time, the "cheap grace" of a summary begins to shape us in ways that work against us. We become less practiced in sustained attention and less comfortable with ideas that require time to unfold. We grow accustomed to resolution and begin to lose our tolerance for ambiguity, not because we have consciously rejected it, but because we have slowly optimized for something else.


This is why I think of the AI summary as a kind of soft tyranny. There is nothing coercive about it. No one is forcing us to summarize our way through the world. It arrives as a gift, as a tool that saves time and reduces effort, and in many ways it delivers on that promise. But it also quietly narrows the range of experiences we are willing to engage. When something cannot be easily summarized, we are less inclined to give it our attention. When an idea takes time to develop, we are more likely to move past it. When a conversation wanders, we feel the pull to compress it into something cleaner and more efficient.


Over time, almost without noticing, we begin to prefer the summarized version of reality. We start to expect clarity without process and insight without effort, and we shape our habits accordingly. The danger is not that we will stop learning, but that we will begin to relate to learning itself in a thinner way, prioritizing extraction over experience, conclusion over formation, the answer over the question.


I do not think the answer is to reject summaries altogether. They have an important role to play in helping us navigate a world that is oversaturated with information. Summaries and redactions can orient us, clarify what matters, and make complex material more accessible. But they should not replace the full version. We still need long books and unedited conversations. We need amateurish podcasts with hesitant guests. We need the director's cut of The Lord of the Rings. We still need messy drafts and uncertain thinking. We need spaces where ideas are not yet distilled and where understanding is still in the process of becoming.


The question is not whether AI can summarize the world for us, because it increasingly can. The question is whether we want to live in a world where everything meaningful has already been reduced to its most efficient form, or whether we are willing to remain in the parts of experience that resist summary. Those are often the places where understanding takes root, where insight is not delivered but discovered, and where we are shaped not just by what we learn but by how we come to learn it.


The Theologian as the AI-Proof Professional


The anxiety beneath the AI boom is palpable. High performers (and high earners) in fields like software development, data analytics, and cybersecurity are reading about rapid gains in AI capability, asking whether their jobs will be next. Some have responded with existential dread. Others have sought to control the situation by adding more technical skills, trying to chart a learning curve ahead of the LLMs. But what if the real risk in the AI boom isn't falling behind? What if the real risk is becoming too replicable, too easy to re-create? 3.5 years into the AI explosion, it's increasingly clear that AI doesn't eliminate all, or even very many, jobs. Rather, AI isolates and exploits the aspects of human labor that are easily "programmable." Enter the theologian as the AI-proof professional, and theology as a quintessentially irreplaceable skill.


Prior to 2022, career security and affluence almost necessitated learning scarce, complex technical skills like coding. In the age of AI, technical skills are increasingly automatable. See it for yourself: in under an hour, you can create an app of your choice with tools like Claude or Lovable. What once took years now takes seconds. When bots become technical, technical skills are no longer scarce. The rarefied skillset of the AI age, the one that is truly impossible to automate, is judgment and meaning-making. While these skills might not be trained in Silicon Valley, they remain the bedrock of theology.



As Cade Metz recently wrote in the New York Times, AI is exceptionally strong in narrow, highly structured domains. It possesses remarkable yet "jagged" intelligence. It is surprisingly weak, however, in situations that are ambiguous, where the context shifts, and where moral reasoning is necessary. Can ChatGPT solve complex math? Yes. Can it navigate the real-world decisions one might characterize as "judgment calls"? Ask your chatbot to solve your next workplace dispute or standoff. I'll wager you a bitcoin it won't help one bit (or byte).


Within this AI economy, jobs aren't replaced wholesale. They are fragmented. Every role becomes a mix of automatable tasks, completed alongside a chatbot, and remarkably human responsibilities. The question is no longer "Will AI take my job?" but "Which parts of my work are protected from AI's jagged edges?"


Thus, as AI handles structured tasks, economic and vocational value concentrates in areas with low feedback, high ambiguity, and true human consequence. Moreover, it becomes crucial to discern when AI is wrong, to interpret outputs critically, and to take accountability for the outcomes. AI can generate answers. We still have to choose which course to take.

The time has come for the theologian. Theology, a discipline of wrestling with the sacred from a very human vantage point, involves tasks that no LLM can replicate. To theologize is to read complex and ancient texts across time and context, to accept that they have multiple meanings, and to construct a message or narrative that is resonant and relevant. 

To do the work of the theologian is to navigate ambiguous situations, responding with an articulation of what is faithful, reasonable, and conscientious. The chatbot follows scripted rules. The theologian forms a coherent viewpoint that informs leadership decisions, ethical stances, and organizational culture.


This is not to say that all programmers should become pastors or that all data scientists should study divinity. But it might be helpful for those fearing the jagged edges of AI to recognize that the discipline of theology, of interpreting meaning, is more applicable than ever. Presented with ambiguity, how might we account for the influence of tradition? How might we draw upon that which is authoritative? Where should we look for meaning, for purpose? We might not always bring up God when we bring up Google, but surely the theological task has newfound relevance for those in sales, marketing, product management, and many other "anthropological" fields.


But were all programmers to become more pastoral, we might be in a better place societally. To think theologically is to consider what it means to be a minister. As Bonhoeffer would suggest, theology is inseparable from ministry. So our argument would be incomplete if we recommended that the AI-at-risk in our society learn only from the heady side of theology. Pastoral ministry involves attending to people navigating uncertainty, grief, and conflict. To be a pastor is to listen, to attend to difficulty in a way that is relational rather than transactional. Who among us isn't navigating uncertainty, grief, and conflict? Who among us wouldn't appreciate the support of a non-transactional, non-anxious, trustworthy person? The tech manager may not preach the Gospel. But the pastoral skillset is increasingly important to the managerial class.


Theology alone is seldom a career path. I’m not encouraging a generation of AI-displaced workers to enroll at divinity school (though some should give that serious consideration). Instead, the practice of theology will become a force multiplier for technical skill. The question isn’t “Does theology get you hired?” It’s “What kind of judgment shows up once you are?”


My advice to the class about to graduate: learn a technical skill. Learn to work alongside AI. But take a critical look at what these tools do and how they are forming us. And at the same time, read great texts and engage great works of art that cultivate patience, focus, and empathy. Put yourself in situations where ambiguity is a given and the next step isn't obvious. AI fluency and human formation form a powerful pair for the road ahead. Tools, paired with telos, will AI-proof your career.


The future of work will not be evenly automated. It will be uneven, unpredictable, and responsibility-heavy. That puts a premium on judgment, presence, and meaning-making. The safest careers are found where AI keeps coming up short.



@ryanpanzer

Leadership developer for digital culture. Author of "Grace and Gigabytes" and "The Holy and the Hybrid," now available wherever books are sold.
