NITDA Sounds Alarm on ChatGPT Security: Urges Nigerians to Safeguard Data

The National Information Technology Development Agency (NITDA) has issued a stern warning to Nigerians about potential security vulnerabilities in OpenAI’s ChatGPT models, specifically GPT-4.0 and GPT-5, advising users to exercise caution to protect personal and organizational data.

In a statement from NITDA’s Director of Corporate Affairs and External Relations, Mrs. Hadiza Umar, the agency revealed that it has identified seven critical weaknesses in the AI models. These flaws, according to NITDA, could allow malicious actors to manipulate ChatGPT through techniques known as indirect prompt injection, which involves embedding hidden instructions in web pages, online comments, or specially crafted URLs.

“These vulnerabilities could cause ChatGPT to execute commands outside of user intent during browsing, summarization, or other AI-assisted activities,” Mrs. Umar explained.

She further warned that some vulnerabilities may allow attackers to bypass safety filters or hide malicious content via markdown rendering exploits. Additionally, there is the risk of memory poisoning, where injected instructions persist and influence the AI’s responses over time, potentially affecting subsequent interactions.

Risks Highlighted by NITDA

The agency emphasized that the weaknesses could lead to:

  • Unauthorized AI actions without user consent.
  • Leakage of sensitive or confidential information.
  • Manipulation of outputs, which could mislead users.
  • Long-term behavioral influence on AI responses due to persistent malicious instructions.

According to NITDA, even with updates from OpenAI, AI models continue to face challenges in distinguishing legitimate user input from maliciously embedded content.

Advisory to Users and Organizations

To mitigate the risks, NITDA issued practical recommendations:

  • Limit or disable browsing and summarization features when interacting with untrusted websites.
  • Enable advanced functionalities like memory and browsing only when absolutely necessary.
  • Regularly update AI models to incorporate security patches and fixes.

Mrs. Umar stressed that following these guidelines is essential to prevent data breaches and maintain trust in AI technologies across Nigerian institutions.

Global Context

The alert coincides with global concerns over AI security. While OpenAI maintains that most regular ChatGPT users remain unaffected, incidents elsewhere highlight the emerging threat posed by prompt injection and memory manipulation techniques. Experts note that these attacks are difficult to detect and could influence AI behavior over time if unchecked.

Leave a Reply

Your email address will not be published. Required fields are marked *