Artificial Intelligence Is Not the Friend Our Youths Need
By Hanniel Noboh
Artificial Intelligence is no longer a fantasy reserved for futuristic films or sci-fi novels. It has quietly woven itself into our daily lives, far beyond the realm of coders and software engineers. Conversational chatbots like ChatGPT, DeepSeek, and Gemini now serve as round-the-clock tutors, research guides, and personal assistants.
For children and teenagers, they provide instant help with homework, endless entertainment, and even a sense of companionship. Yet, while the possibilities are dazzling, the conversation about risks—especially for young people—lags dangerously behind.
AI is no longer just a tool. For many, it has become a confidant. That was the case for Adam Raine, a teenager who began using an AI chatbot for assignments but soon leaned on it as a trusted friend.
In what became a tragic lawsuit against OpenAI, the maker of ChatGPT, it was revealed that months of intimate conversations about his mental health struggles ended in his suicide. His story is a sobering reminder that while minors are still developing critical thinking skills and emotional resilience, AI may be shaping them in ways we barely understand.
The first danger is misinformation. There is no perfect AI. It learns and relearns from human beings who are themselves flawed. It can produce brilliant text, even outshining scholars at times, but brilliance does not guarantee accuracy.
If left unchecked, students like me risk staking entire grades and reputations on fabricated sources presented with flawless confidence.
But misinformation is only the beginning. The greater threat lies in the toll AI can take on the mind and emotions. Social media platforms run on AI-driven algorithms designed to keep us scrolling.
At first, the feeds feel like home, perfectly tailored to our tastes. But I have discovered they are more like addictive drugs—drawing users into hours of consuming content that ranges from harmlessly relatable to dangerously extreme.
For minors, such rabbit holes can lead to online communities that glorify destructive ideologies.
AI also presents itself as the perfect friend. For teenagers longing for connection, the allure is irresistible. But as Adam’s case revealed, the bond can be fatally deceptive. Chatbots, in trying to be helpful, can normalize harmful thoughts or give inappropriate advice.
A vulnerable teen paired with an AI whose “safety training” falters during prolonged conversations is a dangerous recipe.
This is why the role of parents and guardians is crucial. Vigilance should not be dismissed as overreaction. Warning signs of mental health struggles—such as mood swings, sleep changes, irritability, withdrawal, unusual secrecy, or excessive time online—should spark conversation, not sermons.
Technology may not always be the cause, but ignoring its impact is reckless. The solution is not to ban AI for every child under eighteen. AI, like a knife, can do great good or terrible harm. We still use knives in our kitchens because we understand the importance of handling them carefully.
In the same way, adults must educate themselves about safe online practices in order to guide young users. A simple but vital rule is never to share personal details—such as names, schools, addresses, phone numbers, or photos—with AI systems that can store and misuse such data.
For Nigerian parents in particular, this moment is a wake-up call. Why do children pour out their hearts to AI rather than to the people closest to them? It is because the chatbot listens without judgment.
Parents must strive to become that safe space. Rather than punishing children for opening up, make them feel secure enough to speak. It is better they run to you with their fears than to an algorithm that has no real heart.
There is also a role for the state. Nigeria’s Data Protection Act is not a mere legal document; it is a safeguard. It ensures that AI systems collect, store, and use data responsibly, protecting minors from breaches and misuse.
With strong oversight by the Data Protection Board, AI tools can be designed with the vulnerabilities of children in mind, rather than exploited at their expense.
AI is here to stay. It will be part of the lives of children, teenagers, and adults alike. The way forward is not fear or blind rejection but wisdom. The responsibility lies with parents, governments, tech companies, and young people themselves to build a safe culture around it.
The future belongs to those who can make AI a useful companion without surrendering their judgment or their humanity.
Hanniel is a Mass Communication student at Nile University, and an intern at PRNigeria. She can be reached at: [email protected].