Artificial Intelligence
Artificial Intelligence

NITDA’s Warning and the Bigger Truth About AI Security

By Shuaib S. Agaka

When Nigeria’s National Information Technology Development Agency (NITDA) recently warned about security vulnerabilities in OpenAI’s GPT-4 and GPT-5 models, the technical details were unsettling. Hidden malicious instructions embedded in ordinary web content. Safety filters bypassed through formatting tricks. Memory poisoning capable of subtly altering a model’s behaviour over time, potentially leading to data leaks or unauthorised actions.

At first glance, it sounded like a product-specific flaw. A weakness in a particular company’s system. But a closer look reveals a more uncomfortable truth: this is not unique to OpenAI, and it is not new.

What NITDA described reflects a broader pattern that has followed nearly every major artificial intelligence model released in recent years. From Google’s Gemini to Anthropic’s Claude, from Meta’s LLaMA to widely deployed open-source models, similar vulnerabilities have surfaced repeatedly. The technical labels may differ, but the core issue remains the same.

AI systems are powerful, widely adopted, and increasingly trusted. They are also, by design, difficult to fully secure.

Among the vulnerabilities highlighted by NITDA is prompt injection—a technique in which malicious instructions are hidden within seemingly harmless content such as emails, shortened links, documents, or social media posts. When an AI model processes such material during tasks like summarisation or browsing, it may unknowingly follow instructions it was never meant to execute.

OpenAI has stated that it has addressed certain weaknesses, and that is likely true. But NITDA’s caution included an important reality: even patched systems can be vulnerable to cleverly disguised instructions.

This is where the story expands beyond GPT-4 or GPT-5.

Large language models share a common architecture. They are designed to understand context, follow instructions, and generate useful responses based on patterns learned from vast volumes of text. Those strengths are also their structural vulnerabilities.

At their core, these systems are probabilistic pattern recognisers. They do not interpret intent as humans do. They prioritise instructions based on context, probability, and learned hierarchies. When conflicting instructions appear—such as system commands versus hidden prompts inside user-provided content—the model must infer which to obey. That inference process can be manipulated.

These are not obscure edge cases. They are natural consequences of systems that treat language as both data and executable instruction.

Each time vulnerabilities are exposed, companies respond with patches, safety layers, and architectural improvements. Those efforts reduce risk. But they do not eliminate it.

AI security has become a continuous cat-and-mouse cycle. Expanded context windows increase exposure to hidden instructions. Memory features designed to personalise user experience create new attack surfaces. Greater integration into workflows amplifies consequences when things go wrong.

This is not necessarily negligence. It is the reality of rapidly evolving systems deployed at global scale.

The idea of a permanently secure, fully sealed language model is unrealistic under current technological constraints. What exists instead is continuous risk management.

This matters because AI has moved far beyond experimentation. Language models now assist journalists, lawyers, developers, civil servants, researchers, and businesses. They draft reports, analyse documents, summarise policies, and influence decisions. In many settings, their outputs are trusted—sometimes unquestioningly.

That trust is where risk multiplies.

A manipulated output is no longer just a factual error. It can shape institutional decisions, expose sensitive data, or subtly distort analysis. The more seamlessly AI integrates into daily workflows, the more invisible its vulnerabilities become.

In Nigeria, adoption has been enthusiastic and fast. Professionals use AI to bridge resource gaps and remain competitive. Small businesses rely on it for customer support, marketing, and analysis. Yet AI literacy has not grown at the same pace. Many users treat outputs as authoritative rather than probabilistic.

NITDA’s warning, therefore, should not be read as alarmism. It is a necessary reminder.

AI is not an oracle. It is an assistant. Its outputs require verification. Sensitive information should not be fed into public systems without safeguards. Institutional adoption must be accompanied by training, oversight, and clear usage policies.

Most importantly, expectations must adjust.

AI safety is not a one-time problem to be solved. It is an ongoing balancing act between capability and control. As models become more useful, they also become more complex to secure.

The real danger is not that vulnerabilities exist. It is that users forget they do.

NITDA’s advisory joins a global chorus emphasising responsible adoption. Artificial intelligence is here to stay. Its benefits are undeniable. But vigilance remains the price of innovation.

Until that balance is fully understood, the warnings will continue—not because progress has failed, but because progress demands caution.

Shuaib S. Agaka is a tech journalist and digital policy analyst based in Kano.