The National Information Technology Development Agency (NITDA) has cautioned Nigerian cyber users and technology professionals about serious security vulnerabilities in OpenAI’s GPT‑4 and GPT‑5 models, warning that attackers could exploit the flaws to manipulate outputs or access sensitive data.

In a statement, the agency identified seven critical weaknesses, including hidden malicious instructions embedded in normal web content such as social media comments or shortened links. These could prompt the AI to execute harmful commands during routine tasks, including text summarization or web browsing.

Other exploits flagged by NITDA include bypassing safety filters, hiding dangerous content via markdown bugs, and memory poisoning, which can gradually alter the AI’s behaviour, potentially leading to data leaks or unauthorized actions.

Although OpenAI says it has patched some of these issues, large language models remain vulnerable to cleverly disguised instructions. NITDA advised users to exercise caution, always verify AI outputs, and remain vigilant for suspicious online content.

The alert underscores growing concerns globally about AI safety, particularly as models are increasingly integrated into professional workflows and decision-making processes.