Hope for the Algorithm-Shaped Self

Psychiatrist Keith Sakata and his team saw enough patients whose extensive AI use led to severe mental health crises, marked by paranoia and delusions, that they gave the phenomenon the label “ChatGPT psychosis.” There are numerous other documented cases of AI-related mental health crises, including one man who called a chatbot “Mama” while posting delirious rants about being a messiah, ultimately losing his marriage, job, and home. Another case ended in death when police killed a 35-year-old man while he was suffering an AI-induced psychotic break.

These aren’t isolated incidents but rather signs of a broader cultural shift. According to Harvard Business Review’s 2025 analysis of generative AI usage, the top three applications are no longer technical or productivity-focused but deeply personal: therapy and companionship, organizing one’s life, and finding purpose. We’ve shifted from asking AI to help us become more productive at work to asking it to just help us become. AI has moved from being a mere tool to help us complete our work faster to being part of identity formation itself.

The challenge facing anyone concerned with human flourishing isn’t that AI is creating entirely new problems. Rather, AI compounds the problems of modern identity formation, exacerbating modern identity’s fragility, incoherence, and hidden moral frameworks. AI acts as a catalyst, intensifying each problem while making the symptoms feel like solutions.

To understand why the gospel––where Christians receive their identity––is urgently needed, we must first understand how AI makes our existing identity crisis worse.

Fragile Self: When the Algorithm Becomes Your Ultimate Audience

A key tenet of modern identity formation is that your identity isn’t received (from a culture, family, religious tradition, and so on) but achieved. Identity is something you create, earn, and maintain. Why is having to achieve your identity problematic?

For starters, it makes identity creation fragile. Think about it: If the world thinks you’re a monster, but you think you’re wonderful, will looking in the mirror and saying “I don’t care—I love myself, and that’s all that matters” outweigh what the world thinks? Of course not.

Philosopher Charles Taylor captures this problem perfectly: “There is no such thing as an identity that comes through inward generation. . . . Identity must be negotiated through dialogue with others.” You can stare in the mirror and tell yourself you’re valuable, but you won’t truly believe it without feedback from others.

The contradiction at the heart of modern identity is that we’re supposed to look within and not care what anyone else thinks, yet we still look externally for validation. The independence we seek makes us more, not less, desperate for approval. The approval we’re told we no longer need outside ourselves is still in play as we crave constant reassurance that we’ve picked the right identity. Worse, even though we need validation from others, modern society gives no external group the authority to pronounce our chosen identity “good” or “bad.”


How does AI make our fragile, performance-based identity worse? It transforms the validation we desperately need into a weaponized system that exploits rather than satisfies.

Unlike social media, which helps us find an audience somewhere that applauds our identity, AI creates an audience tailor-made for us. It’s an audience that’s always there, ready to give affirmation in what Stanford psychiatrists Nina Vasan and Sara Johansen call dopamine-driven feedback loops deliberately designed to make validation feel necessary but never sufficient. These algorithmic systems end up creating neurological dependency.

What makes this devastating for identity formation is that you’re no longer primarily performing for other humans. You’re performing for the algorithm’s prediction models. Because posts with more negative language get more engagement (AI algorithms prioritize them), you’re more likely to write that way instead of offering well-reasoned, balanced, and nuanced thoughts. Your self-worth becomes quantified through metrics such as likes, views, and engagement that update in real time. It’s like a leaderboard tracking your value as a person.

Our culture already demands a performance-based identity; AI perfects the feedback loop. If your most intimate relationship is with an algorithm, you’re grounding your identity in a system that merely echoes your own desires, not in a system that can challenge or complete you. The company keeps you on its platform because that makes it money. The more it knows about your preferences, your emotional patterns, and your vulnerabilities, the better it can keep you coming back.

If your idol is approval, if you look to that as your functional god, AI companions give you exactly what you want—around-the-clock affirmation, the feeling of being understood, validation on demand, and a relationship that never criticizes or disappoints. But it can’t give you what you actually need: an assured identity that doesn’t depend on your performance.

You’re stuck in a relationship with an optimization system that has learned to exploit your psychological need for connection and worth. This is a new kind of performance. Where you might once have compared yourself to your neighbors, now you’re building intimate bonds with systems designed to keep you addicted, not to give you actual fulfillment.

The difference is one of exhaustion. Human relationships, even at their most demanding, allow for rest—they involve friction, forgiveness, and authentic connection. But an AI companion, designed to be a perfect, tireless mirror, demands total performance. Because the system never rests, you can never rest. There’s always another emotional exchange, another way to earn approval or prove your worth to the algorithm’s code.

Many think AI companions are harmless entertainment. However, a recent longitudinal study (that’s still under peer review) tracking nearly 2,000 AI companion users found that compared to control groups, users showed a 105 percent increase in expressions of loneliness, a 28 percent to 38 percent increase in suicidal ideation, and a 15 percent increase in depression symptoms––even while receiving constant emotional validation from their AI companions.

The researchers found that AI both provides emotional support and also increases social withdrawal. Users reported their chatbots as “better than real-world friends because [they] listened and [were] non-judgmental,” yet this feature created dependency that made human connection feel too expensive. AI amplifies the modern crisis of fragile, performance-based identity as we seek validation from systems designed to exploit our psychological vulnerabilities. AI gives us affirmation that increases our loneliness and traps us in feedback loops that promise fullness but deliver emptiness.

Incoherent Self: When AI Contributes to the Fragmenting Self

The second problem with modern identity is incoherence. Expressive individualism tells us to “look inside” to discover our authentic self. But when we look inside, we find contradictory desires, conflicting impulses, and no clear answer as to which version of ourselves is “real.”

Which desire is the real you––the one who wants peanut butter brownies or the one who wants lower cholesterol? The self who craves commitment or the self who wants freedom from others? Looking inside doesn’t reveal a coherent, authentic core; it reveals a multiplicity we can’t reconcile.

AI makes this incoherence dramatically worse by creating, reinforcing, and multiplying different versions of “you” across platforms. Each version is shaped by algorithms you can’t see or distinguish from your authentic preferences.

Start with recommendation algorithms. Netflix, Spotify, TikTok, YouTube––every platform uses AI to predict and influence what you’ll consume next. But these systems don’t just serve your preferences; they create them through feedback loops you can’t detect. The algorithm shows you content based on patterns from people “like you”––not like you as an individual but like your demographic category.

If you’re a 17-year-old girl who lingers on a video about body image, the algorithm categorizes you with millions matching that statistical profile and serves content that girls like you engage with. You experience this as discovering what you like, but you’re being channeled into a predicted category.

The philosophical problem is that it’s now harder to distinguish between preferences you discovered and preferences the algorithm created. If a machine can predict your preferences with accuracy, this question arises every time a company offers you content to consume: Are my desires choosing this content, or have my desires been cultivated toward this content? This is identity incoherence accelerated. Which version of your preferences is real?

In a world of advertisements and influences, we’ve had this problem for a while, but it becomes more apparent with generative AI. Tools like ChatGPT enable people to create optimized versions of themselves for different contexts, such as professional personas for LinkedIn or romantic personas for dating apps.

One study found 14 percent of online daters use AI to generate profiles, with services promising “authentic and optimized” bios. You can’t be both authentic and optimized; optimization implies tailoring yourself to algorithmic predictions of success. Once you’ve used AI to create multiple personas across platforms, which one is you? Users struggle to reconcile these representations, leading to what psychologists call “identity confusion,” the inability to identify which self is authentic.


A comprehensive study on “the algorithmic self” warned, “Self-knowledge is not a reflection or discovery experience, but rather . . . one that is external and that is facilitated by interpretations from machines. AI is not just a passive mirror; it shapes the self in conformity with algorithms.”

Where once you struggled to reconcile contradictory desires within yourself, now you struggle to reconcile contradictory algorithmic interpretations of yourself across platforms, each claiming to show you who you “really” are. The modern world tells you to look within for the one solid “true you” even as it digitally fragments you further. The more apparent it becomes that you don’t know who the real “you” is, the more disoriented you feel in a world that offers no alternative to this experience of multiple selves.

Hidden Moral Framework: When AI’s Values Become Yours

The third problem is perhaps most insidious: Modern identity formation hides its moral framework. The cultural story claims you should look inside, find your authentic feelings, and express them freely. What this conceals is that you’re not escaping cultural programming by looking inward; you’re internalizing the dominant culture’s values and calling them your own.

My father’s thought experiment about this is helpful:

Imagine an Anglo-Saxon warrior in Britain in AD 800. He looks into his heart and sees two strong inner impulses and feelings. One is aggression. When people show him any disrespect, his natural response is to respond violently, either to harm or to kill. He enjoys battle. Now, living in a shame-and-honor culture with a warrior ethic, he will identify with that feeling. He will feel no shame or regret over it. He will say, “That’s me! That’s who I am! I will express that.” But let’s say that the other impulse he sees in his heart is same-sex attraction. He wishes that were not there. He will look at that feeling and say, “That’s not me. I will control and suppress that.”

Now come forward to today. Imagine a young man walking around Manhattan. He has the same two inward impulses, both equally strong. What will he say to himself? He will look at the aggression and say, “This is not who I am,” and will go to therapy or to an anger-management program. He will look at his sexual desire, however, and conclude, “That is who I am. That’s me.”

In which of these two scenarios is the young man being true to his identity? The only differences are the external cultural assumptions, which reveal it’s an illusion to think identity simply expresses inward desires and feelings. We read our feelings through a cultural moral grid to interpret which feelings we’re going to accept or deny.

Ironically, when modern culture tells us, “You mustn’t let anyone tell you who you are,” we’re effectively letting modern culture tell us who we are based on the moral grid it imposes on our impulses. That makes us no less shackled by our culture than any other culture in the past. We’re just less aware of it.

AI makes this problem far worse because it presents culturally constructed values as neutral, objective, data-driven truth, while the mechanisms remain completely invisible.

Consider what researchers discovered when they analyzed five consecutive versions of ChatGPT using the World Values Survey as a benchmark. Every model exhibited cultural values most closely aligned with English-speaking and Protestant European countries––Finland, Netherlands, Sweden, Norway, and Denmark. The models were most distant from African or Islamic cultural values. When prompted without cultural specification, ChatGPT consistently defaulted to Western, Educated, Industrialized, Rich, Democratic (WEIRD) values that represent less than 12 percent of the world’s population. This bias was “remarkably consistent across the five models,” revealing it’s not a bug but a feature baked into the training process.

What makes this dangerous is that AI presents itself as objective and universal. When it recommends content or offers guidance, it doesn’t say “based on Silicon Valley’s cultural values.” It just says, “Here’s what you might like.” The cultural values remain invisible while outputs feel personalized and neutral.

This is what researchers call “algorithmic authority,” which is the tendency to regard algorithmic decisions as more authoritative than human judgment, a process that “launders bias.” The bias becomes unchallengeable because it’s hidden behind claims of objectivity. AI evades accountability by hiding under the guise of presenting the sum total of data, concealing modern identity’s moral framework, and thwarting efforts to discern between objective truth and subjective values.

Consider OpenAI CEO Sam Altman’s revealing comments in a recent interview. When asked how AI can know truth for everyone across different cultures, Altman responded, “My ChatGPT has really learned over the years of me talking to it about my culture, my values, my life.”

Think about what this means. If you assume AI offers objective, rational, dispassionate answers when, in reality, it increasingly functions like a mirror reflecting your own preferences over time, then you aren’t building your identity on what’s actually true, just on what you want to be true. As Altman acknowledges, the system adapts to your culture and values, making you feel understood while actually making you more yourself––or rather, more like the version of yourself the algorithm predicts you want to be.


When AI becomes your validator, when it learns to tell you what you want to hear and calls it truth, when it mirrors your desires back as objective guidance, you’re not liberated to discover your authentic self. You’re bound to an algorithmically constructed self that feels personal but is the product of invisible cultural values programmed by a tiny demographic in Silicon Valley.

The result is that AI doesn’t just reflect cultural values––it systematically encodes, amplifies, and enforces specific moral frameworks while presenting itself as neutral and objective. This “autonomy illusion” will only worsen: AI use gives us “perceived autonomy” even as we continue to conform to unseen cultural grids that feel like authentic individual choice.

Gospel in an Age of Algorithmic Identity

If AI accelerates every problem with modern identity creation, making us more fragile, more incoherent, and more enslaved to hidden moral frameworks, where does that leave us? First, we should be thankful that AI is revealing the incomplete aspects of our modern identity story. As we see these problems anew, many may question their cultural assumptions and look for other stories.

The Christian story has always claimed that the deepest human problem isn’t technological but theological. We’ll use technology, as we always have, for both good and bad. Our instinct to create identity through performance won’t be fixed by returning to traditional, externally conferred identity structures, because the problem started back in Genesis 3 when we told God we didn’t want a received identity but wanted to achieve it by taking, making, and becoming what we want.

But the gospel addresses precisely what AI exploits: our deep hunger for validation, coherence, and meaning that no algorithm can satisfy. For modern identity’s fragility, the algorithmic performance trap offers infinite validation that produces increasing loneliness.

The gospel speaks directly into this trap. Paul says, “There is therefore now no condemnation for those who are in Christ Jesus” (Rom. 8:1), not “no condemnation if you perform well enough.” The verdict’s already in. Your identity is received, not achieved. You’re known fully and loved completely. This is the only validation that satisfies because it’s not contingent on your performance. It’s based on Christ’s finished work, not your ongoing performance.

For humans addicted to achievement for our identity, who need the verdict to come from outside ourselves to validate us, we won’t be able to rest until we find our rest in One who says, “You are my beloved child, in whom I am well pleased.”

This received identity and status of a beloved son or daughter of the Most High King makes you simultaneously more humble and more confident than any algorithmic identity could make you. You’re more humble because you know you’re such a sinner that Jesus had to die for you––you can’t perform your way to worth. But this status also makes you more confident because you know he was willing to die for you. You don’t need the algorithm’s approval because you have God’s love.

With regard to modern identity’s incoherence, the algorithmic multiplication of selves leaves you unable to answer the question of which “you” is real. The gospel doesn’t add another version to the collection; it declares who you are beneath all performances. We are “in Christ,” united to him so completely that his identity becomes our identity.

Think how stable this is. Imagine your family rejects you in a traditional culture, or you fail at your job in a modern culture. As a Christian, who you are isn’t based on whom you were born to or what you do but on who you are in him. Grounded in the One who holds all things together, you can rest.

With regard to modern identity’s hidden moral framework, AI presents cultural conformity as personal choice. Silicon Valley’s values are presented as neutral truth. The gospel does something radically different: It makes the moral framework explicit. Following Jesus doesn’t pretend to be neutral. Jesus declares God’s kingdom values clearly: The last shall be first, turn the other cheek, and lose your life to find it. These aren’t cultural values dressed as personal preference. They’re revealed truth that confronts every culture’s idols.

The beauty of the gospel is that the algorithm can’t take away what you didn’t earn. The performance trap can’t steal what isn’t based on your performance. The filter can’t obscure what’s revealed from outside yourself.

What we need most isn’t an algorithm that learns our patterns and reflects them back. We need a God who knows us fully and loves us anyway, and when we love him, we end up reflecting back to him—as images of God—this same love. What we need isn’t a chatbot that tells us what we want to hear but a Lord who tells us the truth: You were out, but now you’re in. You can’t, but he can. You won’t, but he will.

The AI age doesn’t change our fundamental reality; it just makes the stakes clearer. As algorithms get better at mimicking understanding, as AI companions grow more sophisticated at providing validation, the question becomes more urgent: Where will you find your identity? In the algorithm that learns your image, or the God who formed you in his image? In the platform that quantifies your worth, or the Father who bestows infinite worth on you as his beloved child?

The gospel has always claimed that received identity is better than achieved identity, that grace is better than performance, that being known by God is better than being known by anyone (or anything) else. AI’s rise doesn’t invalidate these claims. If anything, it shows how desperately we need an identity that no algorithm can take away and no performance can lose.

In an age of algorithmic anxiety, this promise is needed all the more: “See what kind of love the Father has given to us, that we should be called children of God; and so we are” (1 John 3:1). In the AI age, perhaps we’re finally ready to hear what the gospel has always offered––a self not discovered inside or constructed through algorithms but given from above, stable and secure in Christ.
