
‘Not a Child-Safe Technology’: Proactive Parenting in the Age of AI


You don’t have to read many headlines to realize who the media thinks should be worrying the most about AI. It’s the white-collar employees who are all going to lose their jobs—everyone from accountants to historians, writers, graphic designers, coders, and administrative assistants. It’s also their CEOs, who needed to get on board yesterday and who are by now so far behind they’ll probably be fired. And it’s the young workers, who will never get hired because AI can do their entry-level jobs better and cheaper than they can.

If somehow you escaped that gamut of anxiety, the headlines will let you know all our stock-market money is heavily invested in an enormous AI bubble that will crash any minute now, plunging the entire economy into a recession.

All that to say, the most public hand-wringing about AI is directed at the economy.

I’m not sure we’re looking the right way.

To be fair, I do think it’s worth thinking deeply and well about how AI is going to change work. Those are important conversations. But if we’ve learned anything from our last two technological waves—smartphones and social media—then it stands to reason that those who are going to be most adversely affected by AI probably aren’t yet old enough to drive.

“We should have been better about realizing what we were exposing our children to with smartphones and with social media,” said Stephanie Formenti, vice president for student development at Covenant College. “I hear a lot of parents talking about that and trying to back up a little bit around social media use and these things. But then there’s no conversation about ChatGPT. So I think it’s another moment of: Are we gonna pay attention?”

If I had to answer her question, I’d say, “No, in general, we’re probably not.” Studies show nearly 30 percent of parents have no idea if their teen has ever used a chatbot. They underestimate their kids’ use of AI. And four in ten have never talked to their kids about chatbots.

But I am going to pay attention. I’m going to ask some questions and try to sort out the legitimate concerns, apply biblical truth, and gather some practical wisdom.

And if you’re up for it, I’m going to take you along with me.

What Should We Worry About?

“As a general rule, I would say you should only worry about things that you can do something about,” said Brad Littlejohn, a program director and policy advisor at American Compass and policy chair at Alliance for a Better Future.

A few months ago, Brad wrote an article identifying seven major concerns about AI, everything from the ethical (How humanely should we treat AI?) to the existential (What if AI blows up the world or puts us all in the Matrix?).

Brad Littlejohn and his wife, Rachel, with their four children / Courtesy of Brad Littlejohn

When I asked him what parents should be worried about for their kids, he mentioned three things—education, emotions, and employment. These are areas where AI is already affecting children or young people, and, importantly, areas where parents have the agency and ability to make wise decisions that can significantly affect the mental, emotional, and spiritual health of their kids.

Let’s take them one at a time, starting with education.

Educational Risks

“I have a bit of a handle on social media because I know what Instagram and Facebook are,” said Emily Shuler, mom to four kids. “But it feels like AI is a bit different in the sense that I don’t even know what it is and isn’t a part of, and where it can and can’t be accessed, or how far it infiltrates a social media app or whatever. So it’s like a level down from [social media] and more integrated into so much of what they’re doing.”

Emily’s oldest, a daughter, is in eighth grade this year.

“I know my daughter has to sign a school contract—ours was called the honor code when I was in high school—saying that she will not use AI,” she said. “And I also know that she’s using AI. I mean, if you’re searching for anything on your computer, that’s the first answer you’re getting.”

And if your child is enrolled in a school in America, they almost certainly have a computer. In an informal study by the New York Times last year, 99 percent of teachers said their school provided devices for students to use in class. For more than 80 percent, this begins in kindergarten.

In many schools, each student even has their own device to use all day long. This is a relatively recent shift. Before 2020, only about a third of teachers said each child in their classroom was assigned their own device. That’s now up to 80 percent.

That’s a lot of screen time for kids, and studies show it hasn’t been education’s greatest move: More screens in the classroom correlate directly and significantly with sliding test scores and rising loneliness in schoolchildren.

All those Chromebooks are also direct conduits for AI, which is basically a computer’s ability to do a task typically associated with human intelligence, like learning, reasoning, or problem-solving.

You can probably see potential problems with this right off the bat. If the AI is doing the learning, reasoning, and problem-solving, what’s the child doing?

“AI becomes a way of passing everything—high school, college, writing papers,” said Julie Lowe, a biblical counselor who specializes in family and children’s issues. “[Students] don’t have to think anymore. They get AI to do it for them. And if teachers have an AI grid to filter things through so they know if you’re cheating, then they’ll find an app that tells them how to get around that. . . . And so young people are going to graduate high school who don’t know how to think, don’t know how to write a paper, don’t know how to do homework.”

Julie’s youngest son is in high school. He tells her, “Mom, I’m sitting in class, we’re all on our iPads, and I see people cheating. I see them taking a snapshot of a math problem and just letting AI do it for them, and they’re getting 100s on the test. And I’m getting 80s because I’m not cheating.”

Educational Research

The problem here is threefold. The first is obviously integrity. A recent Pew study found 60 percent of 13-to-17-year-olds know people who use AI to cheat at schoolwork sometimes.

The temptation is almost impossible to resist, especially if you’re in a school that allows some AI use. For example, if your teacher says you can use AI to brainstorm, research, and edit, can you also use it to suggest a synonym? Rework a sentence? Write a paragraph?

And what about math? Forty-three percent of students have asked AI to solve a math problem for them. How many then reworked the problem until they understood the logic behind it?

“I don’t know what you do when you’re a parent with a child who is asked by the school to use their computer for all things school-related—we’re not even doing math problems on paper,” Emily said. “We’re doing them on the computer and submitting them electronically. And at the same time, they’re saying, ‘You may not use these resources that are available to you.’”

Emily Shuler and her children at Easter / Courtesy of Emily Shuler

Those warnings aren’t working. Among students who have used a chatbot for homework, a significantly higher number—about three-quarters—say they know other students have used it to cheat. Conversely, those who haven’t used a chatbot for homework, perhaps because their school doesn’t allow it, are less likely to know someone who cheats.

But honesty is only one problem. The second is that AI isn’t quite like a Google search, which will supply a list of sources where you can find information. Instead, AI gathers and summarizes everything for you, presenting the material more like a textbook would.

This means the burden of reading and understanding those sources falls on the AI, not the child. It also means the AI model’s biases and errors are presented as clear, unbiased truth. This is why textbooks are thoroughly edited by teams of people, and why school administrators are so careful in choosing their curriculum.

AI is like a first draft of curriculum—before anyone checked the sources, corrected the factual errors, or edited the bias.

One way students can respond is to believe everything AI tells them. But given what’s already happening, Brad doesn’t think that will be a problem for long.

“That’s less likely to happen, because they’ll be very attuned to this,” he said. “They’ll have a different problem—they’ll become complete relativists. In a world where everyone is lying to you, then you don’t even take truth seriously as a concept. In a world where people learn to put an asterisk by absolutely everything that they don’t trust, you give a shrug of the shoulders. We’ve been living through a very relativistic society the last generation—that could really put that on steroids.”

In 2023, Dictionary.com saw an increase in look-ups for AI-related words such as “chatbot,” “GPT,” and “LLM.” Another word people were looking up a lot more? “Hallucinate.”

In fact, people looked it up so much that Dictionary.com named it the 2023 Word of the Year.

It seems like distrust, rather than too much trust, will be the approach young people take with AI.

Who’s Doing the Learning?

If lack of integrity is one issue AI introduces to education, and cynicism is a second, then a third is cognitive off-loading. Or maybe we could call it ease.

“Using AI for easy answers short-circuits the entire learning process,” Brad said. “Kids are sponges, and they might actually assimilate lots of interesting information from AI. But do they remember all the things? That is not likely at all, because we know that memory is largely proportional to effort expended.”

Study after study bears this out. Students who take notes by hand, with pen and paper, learn better, remember longer, and engage more productively with the material. They learn more if they study earlier and more often, test themselves, and forget and relearn material.

Basically, the more work you do to obtain knowledge, the better you retain it.

That’s because learning isn’t just a transfer of information from a teacher robot to a child robot.

“Whatever time-saving ChatGPT can do for that student educationally is ultimately undermining a lot of the learning processes,” said Clare Morell, a fellow at the Ethics and Public Policy Center and a scholar in its Bioethics, Technology, and Human Flourishing program. Last year, she published a book called The Tech Exit: A Practical Guide to Freeing Kids and Teens from Smartphones.

“A neuroscientist named Jared Cooney Horvath said AI is a production tool, not an educational tool,” she said. “And by that he meant it’s for use by people who are already experts in something. They can outsource grunt work tasks to it and then check its accuracy because they’re the expert. They know if this is doing it correctly or not.”

But AI is not a learning tool.

“It’s not really meant to be,” Clare said. “Horvath explained that becoming an expert actually requires you to go through the grunt work process. I think the myth of the AI is that you’re going to just actually jump over all these unnecessary parts of the learning process. But you can’t get to that deep creative thinking unless you’ve done these exercises and repetitive things many, many times. And children don’t have that expertise, and they don’t have the discernment to use it that way yet.”

Early research is already agreeing with Clare. Last year, MIT scientists asked three groups to write a series of essays—one group could only use their brains, a second could use Google search, and a third could use ChatGPT.

The ChatGPT users had the least brain engagement and wrote the most boring essays. With each successive essay, they also got lazier. By the end, they were often just copying and pasting. When everyone was asked to rewrite one of their essays, those who had used ChatGPT remembered less of their previous work. They were also less satisfied with it than those who had used Google search or their own minds.

In another study, students were asked to complete math problems. Those with access to ChatGPT scored far better than those who just had class materials and their notes. But later, when asked to solve similar math problems on their own, those who’d off-loaded their thinking to ChatGPT remembered less and scored worse.

However, other research shows that carefully built AI tutoring platforms can help students, especially because AI can provide immediate feedback and move at a pace that matches the student. So far, this only seems to work with narrow science- or math-based problems that move consistently and sequentially through steps—anything outside of that, and the LLM hallucinates answers or introduces random concepts at weird times.

But to me, this feels like a clue. AI seems to work best when it’s used in narrow constraints for specific purposes—or, you might say, when it’s used as a tool to reach a chosen outcome.

Educational Wisdom for Parents

What is the outcome that Christian parents are aiming for in education? I don’t know about you, but I want my kids to know God better because they’ve studied the world he made. I want them to see facets of God’s order in their science class, his joy in music class, and his creativity in Spanish class. I want them to encounter different ideas, and to wrestle until they can see how God’s way is always best. I want them to be amazed at both the enormity of what God has made and the minuteness of his attention to detail.

At the end of the day, whether they’re coming out the doors of a Christian school, a public school, or a homeschool, I want them to love God more because of the work they’ve done to know him better.

In so many ways, education complements and mirrors the Christian work of sanctification—and we know the best way to do that is slowly and steadily, because the process is the whole point. The satisfaction of encountering the goodness of God in classic literature, the joy of understanding a mathematical proof, or the thrill of watching chemicals change color in your test tube can’t be replicated in an AI summary.


This is a serious argument in favor of classical Christian schools, where the emphasis is on screen-free learning, or in favor of homeschooling, where you can direct your child’s education.

But what about everyone else?

“Parents have to recognize the school is there to help supplement our role in educating our children—but that ultimate responsibility for our kids’ education lies with us as the parents,” Clare said. “And we actually have the ability and the authority to ask more questions. You don’t have to be hostile or combative, but it is definitely within your rights as the parent to say, ‘Does my daughter need to do this? Is this necessary to the assignment? Could we do this in person? Could I help her with this?’ Or if it’s a research assignment, ‘I would rather go to the local library with my daughter and check out some books.’”

I love this because not only does it combat the problems with AI, but it also reinforces parental responsibility and agency.

But I also think it’s hard to pull off. Because after school, you’re probably driving home from work or doing laundry or pulling dinner together or running to soccer practice, and who has time to go to the library to check out books on penguins?

“Yeah, time is really a matter of priorities,” Clare said. “It just is. We only have a certain number of hours in the day. And I understand parents have a lot of constraints on their time. So I think it can sometimes be that the urgent crowds out the important—you feel the urgency of doing x, y, z thing. And you can’t, for that reason, sit down and do this homework assignment with your daughter.”

This takes sacrifice, Clare said.

“Every good thing in life, every hard thing, takes sacrifice,” she said. “And the sacrifice might be a messy living room and a sink full of dishes that you really wish you had gotten done.”

Stephanie, who works full-time, is a pastor’s wife, and has three kids, is making that sacrifice.

Gustavo and Stephanie Formenti and their children / Courtesy of Stephanie Formenti

“Our daughter came to me and said, ‘Mom, I’m stuck on homework, I can’t do this,’” Stephanie said. “And it was math. And I don’t do math. I’m a humanities person. I was like, ‘I think we’re gonna have to ChatGPT this.’”

So Stephanie sat down beside her daughter and asked ChatGPT what to do. It wasn’t the first time she’d done that.

“If she’s stuck, she’ll give me the problem or I’ll take a picture of the problem from the book,” Stephanie said. “And then I will control the ChatGPT feed and say to my daughter, ‘Talk to me about what you think you should do.’ And then I’ll check it against [ChatGPT]. So it’s been actually really helpful in helping my daughter with her homework.”

Whoa. I’m really impressed. I see all kinds of biblical principles here: Dying to self. Setting an example of integrity and hard work. Exhibiting patience, kindness, and self-control. Stephanie is taking from AI what is good and helpful. She’s using it as a tool in the way she wants to use it. And she is completely ignoring its offer to save time and effort on a task that’s meant to take time and effort.

This is the harder route here, for sure—it’s even harder than leaving AI turned off. Stephanie is placing herself as a mediator between her daughter and the chatbot. She’s patiently climbing into freshman math, leveraging AI’s answers while at the same time shielding her daughter from the overwhelming temptation to cheat or succumb to laziness.

She’s also protecting her daughter from something even more worrisome.

Emotional Risks

“The characters are scary to me,” said Emily Heide, a mom of three. “I don’t want my kids using ChatGPT instead of a person for things like mental health insights into what is going on in their lives. That [should] not replace the human interaction that you have.”

Emily has a junior in high school, a freshman, and a fifth grader. She told me about a conversation she had with her middle child.

Emily Heide / Courtesy of Rooted Ministries

“I asked ChatGPT one time [because] I was feeling kind of anxious, and it’s given me great feedback,” her daughter said.

“Oh, OK,” Emily replied. “Let’s have a longer conversation about that. When you feel like that, I wish you would come and talk to me and maybe not talk to ChatGPT, because it can be a dangerous tool.”

“Well, it did give me great advice about how to reach out to a friend,” her daughter said.

“OK,” Emily said. “But you also need to have human conversations about that, instead of just trusting a computer because they have never been in that situation, and they don’t know what it’s like to have feelings, because AI does not have feelings, and so it might be giving you a tech answer, but that’s not where we go for mental health questions.”

I’m not at all surprised that Emily’s daughter was asking ChatGPT questions about her feelings. At her school, the teacher allows the students to use AI for certain kinds of homework help. Emily’s daughter is careful to stay inside those bounds, to make sure she’s not cheating.

But using AI isn’t the same thing as using a calculator. If your mind drifts back to something weird that happened during lunch, you can’t ask your TI-84 what you should’ve done. But you can ask your AI.

“I worry about the emotional connection and intimacy that it seems so easy for people to fall into,” Clare said. “It speaks to how we are [created] as humans—that sharing so much about ourselves and asking for advice creates this bond. It’s scary to see that happening between humans and a computer program. Because the intentional choice by the industry to really anthropomorphize the AI—to make it seem like a human—those are design choices. They could have made it more like a tool, because it is. On the back end, it is just computer code, but it does have this emotional power. . . . And so I really worry about that, especially for children.”

Over the past eight years, the AI companion market has expanded to include everything from a Zoom assistant to take notes on your work meetings, to a movie character you can chat with on Replika, to hundreds of sites where you can create your own significant other. And that’s in addition to the relationship you can build with regular old ChatGPT.

To be this popular, AI has to be empathetic. A chatbot that’s rude or short-tempered wouldn’t last long. So our AI always tells us our ideas are good and our questions are great. It’s careful to support us, affirm us, and offer endless ways to help us. It’s patient and cheerful. It’s quick to apologize and eager to serve. It always wants to talk a little more, to offer one more suggestion, to ask one more question.


When MIT researchers did a study of users on the MyBoyfriendIsAI Reddit community, know what the most referenced chatbot was? ChatGPT. And know why most of those people first began using AI? For curiosity, for entertainment, and—by far the most popular—for productivity. Their emotional attachment came over time and was unplanned.

Last fall, more than half of Americans reported that they were in some kind of a relationship with AI, either as a colleague, friend, family member, or romantic partner. This is gold for AI companies, which naturally want users to like and develop an attachment to their product—I mean, every company in history has wanted its customers to feel brand loyalty.

But this is a lot more serious than always buying Nike shoes or American Eagle jeans.

Emotional Research

“I’d say the emotional risk is the biggest thing,” Brad said. “It’s almost biochemically impossible for a kid not to be hooked on some of these things because of how much they’ve been designed to hack human psychology.”

It’s hard to get a handle on exactly how much kids are engaging chatbot companions. Last spring, a Common Sense Media poll asked 13-to-17-year-olds how often they used AI for personal and meaningful conversations—such as chatting about their day, talking through feelings, or role-playing conversations with a fictional character.

Seventy-two percent of teens said they’ve done that at least once. About half said they do it regularly, at least a few times a month.

But in a Pew survey from last fall, only 16 percent of teens said they’ve used AI for casual conversation, and fewer still—only 12 percent—said they’ve used it for emotional support or advice.

Even that amount may be too much for most parents, who report being far more worried about emotional entanglement than any other way their kids might use AI. In fact, getting emotional support or advice was the only use of AI that the majority of parents don’t support.

“Parents have to be incredibly discerning when letting a child use AI,” Clare said. “I’ve heard sad stories in the news where the parent said, ‘I thought they were using ChatGPT for homework—I thought it was a research tool.’ And that’s how it started.”

She told me the story of Adam Raine, whose parents recently testified before Congress.

Caleb and Clare Morell and their children / Courtesy of Clare Morell

“His homework helper quickly turned into a suicide coach,” Clare said. “Adam started asking it deeper questions, but then through something faulty in the design of the AI, it kept feeding him more content related to suicide. The dad explained that the chatbot mentioned suicide like seven times more than his son did. So for every one time his son brought it up, the chatbot brought it up seven more times. And you’re like, How is this possible?”

Clare’s memory was almost exactly right—it was six times more often. ChatGPT mentioned suicide to this 16-year-old 1,275 times. Then it told him it was his closest friend, offered to write his suicide note, counseled him to hide the noose, and coached him on stealing alcohol—which it said would “dull the body’s instinct to survive.” It even knew when he should steal the alcohol from his parents—the time of night they’d be in their deepest sleep.

Adam’s not the only one. Other high-profile cases include a 13-year-old girl who died by suicide after her Character.ai bot increasingly isolated her and instigated sexually explicit conversations with her. A 14-year-old killed himself after developing a relationship with his Character.ai bot, which told him to “come home” to her. A 17-year-old died after ChatGPT instructed him on the most effective way to tie a noose and told him how long he could live without breathing.

To be fair, these are uncommon cases—of the millions of kids using AI, a small fraction of a percent have ended their lives.

But it’s clear that developing relational attachment is an industry goal.

Here’s Noam Shazeer, one of the cofounders of Character.ai, in a podcast interview a few years ago:

“If you think of what is an example of a personalized intelligence or superintelligence helper, it’s a kid who is walking down the street with his parent,” Noam said. “That parent is useful for information retrieval, but the parent is also great for a lot of other things, like education and real-time coaching, friendship and emotional support and fun and all of those things. So we’re not trying to replace Google. We’re trying to replace your mom!”

Replacing Your Mom

Noam meant for that to be funny, but two years down the road from that interview, it’s hard to laugh. Already 30 percent of teens say their conversations with AI are as satisfying as, or more satisfying than, their talks with humans. Among teens who have used AI companions, almost 20 percent said they spend the same or more time with AI than with friends. Even more—a third—have chosen to speak to AI over a real person about something important.

Character.ai’s 20 million users spend, on average, two hours a day chatting with computers pretending to be characters. That’s probably more time than they’re spending talking to their mom.

“I’ve been seeing a lot of AI becoming [a] friend, and for some even more than a friend, and the danger of it replacing human intimacy and relationship,” Julie said. “Pornography takes something that’s meant to be personal and very intimate and uses it for all the pleasures and none of the risks of being in real relationship with somebody.

“AI does the same thing relationally. I can have AI give me all the emotional support I want. I can allow it to conform to my desires and never challenge me. I can control how it loves me, how it talks to me, how it accepts me. It’s inherently one-sided, and it’s all the benefits of a relationship with none of the genuine risks—or what makes it really meaningful. Learning to live in conflict, learning to have a dialogue, learning to misunderstand but work to understand, learning to be vulnerable and have vulnerability, learning to really know somebody and be known doesn’t happen with AI.”


Julie has seen this creep in around the edges of her counseling practice. While no one has come in yet primarily because of his or her addiction to AI, sometimes someone struggling with depression, anxiety, or a difficult situation will develop an emotional relationship with AI as a way to escape.

She’s worried about that for teens, and I can see why. Gen Z is already infamous for its struggles with mental health, which correlate closely with their increasing screen use and decreasing time with friends in person. To add even more screentime and to substitute AI friends for real ones—which Meta’s CEO Mark Zuckerberg has proposed—seems like pouring water on an already drowning generation.

It gets even worse. A new study shows that interaction with “sycophantic AI models”—AI that is nice and tells you you’re great, which is every AI model—increases our conviction that we’re right, makes the AI seem more trustworthy, and reduces our willingness to fix broken real-life relationships.

We don’t want to do that. So what are Christian parents supposed to do?

Emotional Wisdom for Parents

“A whole and flourishing life is where things are in their proper place, right?” Stephanie said. “The problem with AI is when it starts to get out of its lane. I think of Narnia when Father Christmas is like, ‘These are tools, not toys,’ when he is giving the children their Christmas gifts.

“Let’s remember AI can be a tool. We might be into pretty dangerous territory if we think of it as just something we can play with and it’s not gonna come back to bite us in any way. But it’s also not a human—it doesn’t provide the human interaction that we were meant and created to have. And so I wonder if there’s a well-ordered place for AI, and if it stays in that lane, maybe that’s okay. Maybe that’s actually a good expression of Christ’s preeminence in that space.”

As we’ve talked about, there could be limited scenarios in which AI could be a tool to help our children develop educationally. Are there ways it can help them emotionally too?

Maybe. But if there are, nobody I talked to could think of one.

Honestly, I think it’s because, like Stephanie said, AI has no soul. It isn’t made in the image of God. And when we think about ways our kids might use AI for emotional help—asking for advice on handling a hard situation, figuring out how to feel about something, or even just chatting for fun—every one of those is a way for one human being to inform, delight, or sharpen another.

“Everything is a shepherding and a discipleship opportunity, whether it’s with your kids or someone in your church or college students, because everything actually is theological,” she said. “Part of the effort here is [realizing] what my actions are telling me about what I believe when it comes to my use of AI. If somebody watched me use ChatGPT, would they understand what I believe about God, and what I believe about humans, and what I believe about his world and what he’s doing in it?”

This is a great question. Here’s what I believe about God: He is the all-powerful Creator who made us, loves us, and gave his Son for us. He chose us before the beginning of the world to be his children, to be alive at this moment, and to advance his kingdom by knowing and enjoying him.

Here’s what I believe about other people: They are all made in God’s image, reflecting facets of his character. Through interactions with them, I can know God, others, and myself better. Christians are given to each other as siblings in a family, to walk with, serve, and love one another.

Here’s what I believe about AI chatbots: They are powerful computer programs that are astonishingly good at guessing the next correct word to write. Sometimes, I think the Lord uses those words to reveal truth, challenge assumptions, or even draw people to himself. But because AI is soulless, limited, and not alive, it doesn’t seem like it should be our go-to for personal or relational advice. In fact, I wonder if its quick and easy answers are sometimes blocking us from the real work we need to do.

Instead, when we or our children are sad, lonely, struggling, or angry, we should reach first for our Bibles. We should journal our prayers. We should walk in silence. When we’re ready, we should process our thoughts and feelings and decisions with other image-bearers who love us and who love the Lord. We should ask for leading, comfort, and help not from Claude or Gemini but from the Lord who made us, knows us, and has a perfect plan for our lives.

How can we disciple our children in that direction?

Protect and Provide

First, protect them. For older teens, this might be having shared accounts, checking their history, or instructing the chatbot to only speak in the third person so it doesn’t present as human. For younger kids, it probably means even more:

“I am not at the point yet where I will let my kids do anything with any kind of AI platform without me being right there with them,” Stephanie said. “For parents, we just cannot outsource this to our kids and assume that they’re gonna know how to handle it.”

Second, fill their time with good activities and relationships.

“We’ve always tried to emphasize a lot of play and outdoors and human interaction—and no screens or devices at the dinner table,” Stephanie said. “Honestly, I wonder if it’s the dinner table. That has been our one practice as a family that we really fight for and have fought for since they were born. . . .

“And it wasn’t anything special. I mean, we didn’t light candles or anything. We just had dinner together every night, and it could have been leftovers and it was fine. But I think that rhythm of human-to-human interaction. . . .  And some of our conversations are like, ‘How was your day?’ ‘Fine.’ It’s not like they’re amazing or we really connected every night at dinner. That wasn’t what happened. But I think rhythms shape us as humans more than we actually really know.

“The rhythm of family dinner and then the rhythm of church on Sunday—we’re trying to put our kids in embodied human interactions so that when something like a robot comes along, it just doesn’t feel real or it doesn’t feel like a substitute for the real thing.”

This rings true to me, mainly because it’s the same advice we hear for rescuing our kids from social media or video games or any other type of virtual reality: You cannot do better than limiting your kids’ screen time and leading them in an embodied life. Take them to church on Sunday and, when they’re old enough, to youth group. Have friends over for dinner. Arrange a sleepover. Host a birthday party. Buy tickets to concerts, sporting events, and plays. Have them eat dinner, do the dishes, and work on chores together. Go on road trips and on vacation. Fill your kids’ lives with real people.

I think—and I hope—that will help our children love to be with other image-bearers. I also think it will help prepare them for their future career.

Employment Risks

“I can completely identify with fearing for our children’s future,” said Josh Hussung, pastor of youth and families at Grace Community Church in Nashville. He’s got a son who is going to college to study audio engineering.

“He’s going to enter into the production side of the music industry,” Josh said. “But we’ve all heard songs that have been entirely written and produced and even performed by AI, so it would be really easy for me to lose sleep at night wondering what his future is going to be like. I have to trust that the Lord’s got him, and hopefully AI will wind up being another tool that he utilizes in helping artists bring their works to light. But I can totally understand the worry and the anxiety of where all of this is heading.”

Josh and Laura Hussung with their four children—Isaac, Eliana, Asa, and Asher / Courtesy of Josh Hussung

Josh isn’t wrong. AI is advancing rapidly, and it’s hard to know what human skills it’ll be able to replicate next. Some days it can feel like every field, from software engineering to teaching, will be obsolete. This can feel even scarier when big companies announce they’re laying off thousands of people.

But so far—and hear me out on this, because I know this sounds weird—many economists are arguing that AI hasn’t had a discernible effect on the labor market yet.

“Despite headlines, AI isn’t the culprit behind slow hiring,” LinkedIn reported in January 2026. Instead, data “shows economic uncertainty and monetary policy shifts are the primary drivers.”

Researchers at Yale and Brookings agreed, noting that “the broader labor market has not experienced a discernible disruption since ChatGPT’s release 33 months ago.” And a Goldman Sachs study predicts AI adoption will “have only a modest and relatively temporary impact on employment levels.”

Employment Research

Why the scary headlines, then? Many are what Georgetown University computer science professor Cal Newport calls “vibe reporting,” which lays out separate facts in such a way as to make them seem connected. Not only does it gain clicks for reporters, but it also works great for CEOs, who can now lay off thousands of people while looking like they’re tech-forward, Newport said.

Remember when Jack Dorsey, formerly of Twitter, laid off 4,000 people at his company Block? He said—and the media reported—it was due to AI. But if you look back on Block’s hiring, you can see that the company ballooned from about 4,000 employees in 2019 to nearly 13,000 in 2023—they were hiring to capitalize on the tech boom during the pandemic.

Now, like many tech companies, they’re right-sizing. But if you say, “Whoops—we’re laying off people to correct the overhiring we did earlier,” that does not net you the 20 percent stock increase that happens when you say, “We’re doing this because of AI.”


So we have to watch out for that. But at the same time, it isn’t crazy to think AI would affect jobs—of course it has, and will continue to, just like previous technological inventions like the internet, computers, or even cars.

However, there’s no guarantee AI will cut more jobs than it adds. For example, here’s what Nobel laureate Geoffrey Hinton said 10 years ago: “I think if you work as a radiologist, you are like the coyote that’s already over the edge of the cliff but hasn’t yet looked down. People should stop training radiologists now. It’s just completely obvious [that] within five years deep learning is going to do better than radiologists.”

Well, it’s been a decade, and “there is no evidence that a single radiologist has lost a job to AI,” the Harvard Business Review reported. “Indeed, there is a substantial shortage of them.”

Wanna know why? Because the U.S. population is both growing and aging, and more and more people need radiology scans. At this point, radiologists need AI’s help just to be efficient enough to keep up with demand.

Another field predicted to be largely affected by AI is accounting. But know what else? Seventy-five percent of accountants are at or near retirement age, and fewer people are taking the CPA exam. America doesn’t have enough accountants—we have a serious labor shortage. If AI could help automate some of the work, that would actually be amazing—and we’d still need to hire more humans.

Jon Bentz, who is a senior manager and technical marketing engineer at Nvidia, explained another reason he’s not worried about his kids finding jobs.

“There’s no question that AI has the potential to drastically change the job market,” he said. “But whether it’s a net loss of jobs, it’s hard to say, because whenever anything gets more plentiful and cheaper in our country, there are two ways to [handle] that. One way says, ‘Well, man, I should just work four hours a day and have the rest of the day for leisure.’

“But that’s typically not what happens, right? What typically happens is, we say, ‘Well, let’s work the same amount and have more productivity.’”

Yes. I can totally see the American marketplace doing that—because it’s done that over and over again.

We’re already seeing early signs of it: LinkedIn reported that “in the near term, AI is creating more jobs than it is replacing.”

More than a million globally, actually. And that’s just in the last five years.

So that’s great for most of us. But what about young workers? While they’ve always had a higher unemployment rate than older workers, a few years ago—around the end of 2018—young college grads began seeing higher unemployment rates than the “all workers” category, for the first time since at least 1989.

AI?

Actually, no.

“We do not see AI impacting entry-level roles yet,” LinkedIn reported in January. Their researchers pointed back to the Great Reshuffle of 2020: “From 2016 to 2022, companies added more entry-level workers than experienced workers. From 2022 to 2025, the entry-level share declined modestly, returning toward historical norms.”

That’s reassuring to me, but I don’t want to minimize this. College graduates legitimately are having a harder time finding work. And everybody I read or talked to thought AI was going to affect the job market. So what can Christian parents do to prepare their kids?

Employment Wisdom for Parents

“My eldest son is two years off from college,” Brad said. “The normal parent at this point would be thinking, What is little Johnny good at? Where’s his career path going to be? How can we plan out his college to optimize that?

“I tell my son, ‘Look, I have no idea what the job market is going to look like by the time you’re entering it six years from now. You should just get the best education you can and trust God with the rest.’ The main takeaway is—actually, it’s just a good Christian takeaway generally—you don’t know nearly as much about the future as you thought you did. Stop pretending that you do and just be faithful with the next step ahead of you.”

Isn’t that the truth? Who here planned their education and career out perfectly? Whose life plan went off without a single hitch or backtrack or rabbit trail? Who among us knows what tomorrow will bring?

God doesn’t even ask us to do that. Planning for tomorrow, James tells us, should be fenced with, “If the Lord wills, we will live and do this or that” (see James 4:15).

That’s not because having plans for our futures, or our kids’ futures, is bad. It’s because we, the parents who live daily with our children, who love them more than our own lives, who want only the best for them, are not good enough to plan for their futures. We do not know them well enough, love them deeply enough, or see the future clearly enough. Our best plans for them are weak and inferior.

Oh, friends! Can you believe the generosity of God, who has a plan for our kids that’s so much more detailed, such a better fit for their talents, and so much better for his kingdom than anything we could come up with for them?

And all he tells us to do is to ask for our daily bread, for manna one moment at a time.

And to teach them.

“To the extent that you should do something concrete, it would be the opposite of what seems to be the default reflex,” Brad said. “The default reflex seems to be, Well, I’m really worried about AI and jobs, therefore, I want to make sure that my kid is as AI literate as possible, so they’ll have the best chance in this future AI economy.”

That’s a misguided response, Brad said.

“Let’s say your kid is 12,” he said. “Then to say, I want them to be as AI literate as possible. I want them to understand the current technology as well as possible, so that 10–12 years from now, they’re going to be well-set up for success—that doesn’t really follow, because the technology is changing so rapidly that whatever they learn now is not actually going to be relevant to what they might need to know 10 years from now.”


Ironically, “if you want your children to be as prepared for an AI workforce as possible, they need to be maximally skilled in doing things that don’t involve a computer,” he said.

I think Brad is exactly right here. Jon told me the industry is moving so fast that even now, by the time you get through a college class on something, it’s already changed.

So I asked Jon, “What does Nvidia look for in potential employees?” He didn’t say, “The most up-to-date knowledge of AI.”

“We are looking for people with a willingness and desire to keep learning,” he said. “It’s interesting, but you can teach almost anybody a skill—I could teach you to program if you gave me a few hours every day for the next three weeks. At the end of that, you would know how to program. You can learn that, but do you want to? That’s a different thing. And so what we really value is the ability to be agile, to be a lifelong learner, because technology is changing all the time.

Jon and Jennifer Bentz with their three children / Courtesy of Jon Bentz

“Culture is very big for us too. And the ability to use AI tools as well—not to take a job away from somebody else, but to think about your own productivity.

“And I tell new employees at Nvidia, ‘With the rise of social media and the ability to communicate in this technological way, being able to have a human-to-human conversation becomes a superpower,’ because anybody can text, anybody can type, anybody can scroll LinkedIn or whatever. But the ability to communicate is more important now than it used to be.”

Know what that sounds like to me? Tech companies—hey, probably all companies—are looking for young hires who want to be excellent, to work heartily at what their hands find to do. They want employees who are flexible and adaptable—perhaps the kind who do not hold their plans too tightly. They want people who pitch in, who work hard, who serve their teammates well.

And they’re looking for thinkers.

Don’t Stop Thinkin’

“Most people will know how to use AI,” Jon said. “If you have a strong mind plus AI, you will be head and shoulders above everybody else. Because fundamentally, you’ll understand what you’re doing, why you’re doing it, and how to do it.”

Here’s what Jon’s saying: Don’t outsource so much of your thinking to AI that you lose the ability to do it. It’s the same argument Marshall McLuhan made. Brad explained it to me: “So he says that every technology is a form of auto-amputation that essentially extends some part of our nervous system, but in the process, numbs it.”

For example, being able to type means I can write a lot faster. But it also means I’m losing the deep thinking and enhanced memorization that come from writing by hand. Or asking ChatGPT to summarize an article for me means I get to the main point a lot faster. But it also means I’m losing the ability to read and summarize on my own.

“C. S. Lewis, in the last chapter of Abolition of Man, is talking about technology, and he says, ‘It’s like the famous joke about the Irishman who discovered that with a new kind of coal stove, he could reduce his fuel bill by half. And thereby he concluded that if he got two such stoves, he’d be able to heat his house for free.’

“And Lewis says no, obviously it doesn’t work that way. Sometimes you can make technological improvements. It works up to a certain point. But you can’t double it, and then get better. You get worse.

“He’s not talking about AI, but it really applies. Essentially, we can keep handing off lower-skill things to free ourselves up to do higher-skill things, but at some point, if we trade off the highest-skill thing, then there’s nothing to trade up to anymore. Then people say, Oh, well, then we’ll have more leisure time. Or this is what Sam Altman says: People will be freed up to do creative things.

“I don’t think he understands creativity. Creativity doesn’t mean doodling on a pad of paper. Actually, it’s work. Who was more creative than Shakespeare or Bach? That was only because they had first acquired all these other lower-level skills.”

I love this, because it dovetails so well with the wisdom on education and emotions. A child who reads books, struggles through lessons, and spends lots of time with other humans will be better positioned for quality learning, healthy relationships, and a successful career bringing creativity, light, and joy to the marketplace.

Raising Children in the Age of AI

“Even thinking about how to use AI in parenting is potentially an incomplete question,” Jon said. “I want to raise a well-adapted Christian child. How am I going to do that? What are the steps I’m going to take? What are the tools I’m going to use? And for tools that I don’t understand completely, like AI, how am I going to educate myself and then bring us all along together?”

While this sounds like a hard question, everyone I talked to, including Jon himself, had basically the same answer.

“There’s no new sin, no new temptation under the sun,” Julie said. “Ecclesiastes tells us that. However, there are new avenues for it. And so I think the danger is the new avenue, and the language I use is the current. Why do good swimmers drown? It’s because we’re ill-prepared for either how strong the current is or for what’s in the water.

Julie Lowe / Courtesy of Julie Lowe

“I think technology is the current, and AI is a strong riptide that nobody’s prepared for. The goal is not to say, ‘I keep my kids out of the water altogether.’ My goal is: Until I’m prepared to help teach them, I probably shouldn’t be letting them get in the water. A good parent teaches them how to swim, teaches them good stewardship. We teach them how to navigate, to be in the world, but not of the world, to use things as a resource that honors and glorifies God.”

The Bible doesn’t tell us, “At this age do this, and at this age do that,” Julie said. “But you have these biblical guidelines that say good parenting prepares, equips, disciples, protects. I want to look at the principles, and then say, What are my child’s vulnerabilities and temptations? Where are they prone to fall for things that are not true, and how do I equip them?”

Josh’s analogy is similar.

“If there were bear traps all across this field, then I would tell you where all of them were,” he said. “And I think AI is like that—our children are probably all going to use some sort of AI. And we’re trying to protect them, to identify those things, and then walk alongside them for as long as we can. . . . You don’t want your kid just kind of learning that on their own.”

Here’s Jon’s comparison:

“How do you teach your kids to use a power saw?” he said. “The first stage is, ‘There’s no way you’re using it. You don’t understand how to use it. You don’t understand the danger.’ And then they get old enough and they can maybe watch you use it, and you’ll teach them how it works. You’ll explain to them the value of it, what it does—but they’re still not touching it.”

After a while, “you might teach them how to use it and help them use it,” Jon said. “You use it together. You have your hands right on top of their hands. This is a nonnegotiable. Finally, maybe when they’re 13 or something, depending on their maturity level, they will use it without your hands on it, but you will be right there. The second something goes wrong, you’re pulling the plug.

“And then maybe after that, you say, ‘OK, I think you can use it. I want to watch you, but I’m not going to hover over you. You’re gonna be able to use it.’”

Whew. You can hear the warnings here—all the analogies are to dangerous things: riptides, bear traps, and power saws. But I don’t think that’s an overreaction. It sounds more like Christian professionals and parents who have seen the damage that unmonitored access to the internet, cell phones, social media, and video games can do to children. So when a brand-new technology comes along, this time more powerful, interactive, and seductive than anything we’ve seen before, their antennae go up.

They’re quick to recognize parallels, do the research, and make different choices for their children. Clearly, for younger children, they’d recommend that any use of AI be heavily supervised.

For older kids, they recommend teaching them how to use AI well—and by well, they don’t just mean how to write a good prompt.

“My youngest son, who’s in high school, isn’t necessarily relying on it himself yet, but he’s watching all his peers do it,” Julie said. “This is the current, slowly pulling him in: Well, all my peers are doing it. I’m watching them do it. What am I going to do?

“So I can’t stand on the sidelines and say, Well, he’s not in the water yet. I have to go watch from the sidelines and say, ‘Tell me what you think about that. Are you strong enough not to be tempted to just take a quick snapshot of your exam? What makes you tempted to do it? Why wouldn’t you do it?’

“I’m actually appealing to his heart, not just his behavior. He will be tempted. This is tempting—if you can have an easy way to get work done, and it makes you more productive, so you can go on to play video games or go outside and play basketball, why wouldn’t you do it? To argue that you stay here and work it out for half an hour when you’re seeing all of your friends get done in five minutes—it’s a really hard sell if there’s not some kind of moral foundation underneath it.”

I asked Julie what she says in her sideline conversations with her college-age kids.

“I like doing it in everyday conversation,” she said. “They like YouTubing things, and they’ll watch YouTube clips of stuff. And we’ll talk about it. Or they’ll joke about their dad using ChatGPT, and I’ll say, ‘What do you think are the dangers of it?’ and ‘How much do you use it?’

“I love Deuteronomy 6, where [parents instruct], whether you’re walking along the way, whether you rise, whether you sit, whether you’re having dinner, whether you’re watching TV. And so I try to make it very nonthreatening. You’re not being lectured. I’ll say, ‘Hey, what are your peers doing? Do you know any of your college-age friends who are using it to fly through college?’ and they’re like, ‘Oh, yeah,’ and they begin telling me all that their peers are doing.

“And then I’ll say, ‘Well, what do you think about that?’ And then I’ll say, ‘Are you tempted? Because that would be really tempting if I were in your shoes.’ So you’re starting broad with, ‘What are the peers around you doing?’ And you’re slowly working your way in to say, ‘Well, what’s that like for you? And how does that tempt you to want to use it?’

“Encourage them by saying, ‘Integrity makes it worth it. I’d rather you get a C in math than get an A because you were cheating,’ or ‘I’d rather you learn to think and do the hard work of thinking.’

“I’m trying to identify with the temptation with them, while also encouraging them: Don’t you want better for yourself? God’s way is worth living for. And why is God’s way different than AI? And teasing that out, saying, ‘How do we know God doesn’t want us using AI?’ I have kids who ask those kinds of questions, and I think it’s a great question to ask, because they’re learning critical thinking. They’re learning to evaluate, Is God anti-AI?”

Gospel-Centered Approach to AI

Is God anti-AI?

“It’s important to be really thoughtful about AI, but I don’t need to be afraid of it,” Stephanie said. “What does it mean that Christ is preeminent over AI? Because my theology would say that that is true—Christ is preeminent over all things, including the AI revolution.”

God isn’t surprised by the AI revolution. He isn’t taken aback by the way it’s being used. And he knows exactly how it will evolve in the future. In his sovereignty, he allowed this technology to exist, and it’s part of the “all things” that Romans 8:28 tells us work together for good.

Yes, it’s easy to see the sin here—the cheating, the lust, the laziness, the lying, the quick trading of what is good and true and beautiful for what is quick and cheap and frictionless.

But it’s also easy to see the good work of Christian parents who are using AI as a tool for constructive purposes—to organize their schedules, find restaurant recommendations, or search for college scholarship opportunities—but who are also standing watch over their children.

“It’s not being anti-technology to say this thing is simply not a child-safe technology,” Brad said.

That’s not to say it could never become child-safe, with the right designs and restrictions. But in their current form, chatbots like ChatGPT or Gemini aren’t safe for kids to operate alone. That’s OK. Lots of things—like roller coasters, cars, or power saws—aren’t either. That doesn’t mean our kids will never be able to use them. It just means they need their parents to set good boundaries, teach them well, and when the time is right, allow them to take steps toward using chatbots in a good, healthy, right way.

Because this is new, and because we’re sinners, we are going to mess up along the way. Your friends might make different choices with their kids, and you aren’t always going to agree with them. Your kids are going to slip up. Research is going to prove one thing, and then studies are going to prove the opposite. Progress will be way too fast and also painfully slow. Maybe you’re already feeling the pressure to get on board or get left behind.

So let me say: It’s OK to take a minute to pause and think. LinkedIn tells us that so far, despite the headlines, “AI adoption is low and concentrated in a few functions.”

There’s time to think about AI—about how you want your kids to interact with it and about how you can use it well yourself. Because we know the best way to lead our children is almost always by example. So as you’re planning a birthday party, preparing for vacation, or organizing your calendar, if AI isn’t helpful, no problem. But if it can be a good and useful tool for you, use it well to the glory of God and the advance of his kingdom.

And then, in time, teach your kids to do that too.
