South Africa’s first draft National Artificial Intelligence Policy was supposed to signal that the country was ready to lead the continent in the age of AI.
Instead, it has become a warning about what happens when the state tries to regulate a technology it has not yet shown it can use responsibly.
Minister of Communications and Digital Technologies Solly Malatsi has withdrawn the draft policy after it emerged that the document contained fictitious and unverifiable sources, some of which appeared to be AI-generated. News24 reported that the reference list included sources that appeared to have been fabricated by artificial intelligence.
The draft had been approved by Cabinet on 25 March 2026, with a further special Cabinet sitting on 1 April, before being published in the Government Gazette on 10 April for public comment. The public consultation period was meant to run until 10 June 2026.
That process has now been stopped, and the Department of Communications and Digital Technologies (DCDT) has placed two unnamed officials on precautionary suspension with immediate effect, pending the ongoing investigation into the draft National Artificial Intelligence (AI) Policy.
In a separate case, the Department of Home Affairs (DHA) has suspended two senior officials with immediate effect after apparent AI "hallucinations" were detected among the references appended to the recently Cabinet-approved Revised White Paper on Citizenship, Immigration and Refugee Protection.
South Africans Deserve Better
Malatsi admitted that the Department of Communications and Digital Technologies “did not deliver on the standard” expected of an institution entrusted with leading South Africa’s digital policy environment. He said the most plausible explanation was that AI-generated citations had been included without proper verification, calling the lapse proof of why human oversight in the use of artificial intelligence is critical.
The irony is difficult to miss.
A national policy meant to guide the responsible use of AI has been withdrawn because of the very problem responsible AI governance is meant to prevent: the uncritical use of machine-generated material without human verification.
This raises a serious credibility problem.
The policy was meant to help South Africa define how artificial intelligence should be developed, regulated, adopted, and governed across the economy. The draft included proposals for institutions such as a National AI Commission, an AI Ethics Board, and an AI Regulatory Authority. It also proposed incentives such as tax breaks, grants, and subsidies to encourage private-sector collaboration.
Those are not small ambitions. They touch the future of work, industrial strategy, public services, education, data governance, privacy, innovation funding, and South Africa’s competitiveness in the global digital economy.
That is why the quality of the document mattered.
Fake research with real consequences….SA's AI policy will shape hiring, surveillance, banking and healthcare. The foundation is FABRICATED. The damage will be very real. This isn't incompetence anymore. This is something WORSE.
— SlindohYorlo✊🏿🧨 (@Slindoh57914935) April 25, 2026
South Africa has universities, researchers, data scientists, entrepreneurs, civil society organisations, and private-sector players already working in AI. Professor Vukosi Marivate of the University of Pretoria, for example, was appointed to the United Nations Independent International Scientific Panel on Artificial Intelligence in 2026, a 40-member global panel selected from more than 2,600 applicants across over 140 countries.
The scandal therefore suggests something concerning: the country may not yet have the institutional discipline to mobilise that talent properly in policymaking.
A credible AI policy is part of the operating environment for innovation. It helps clarify how companies can use data, what ethical standards apply, how public procurement may evolve, what risks regulators are watching, and how AI products can be deployed responsibly in sectors such as health, education, finance, agriculture, logistics, and public administration.
Without clear rules, innovators face uncertainty, investors hesitate, regulators move unevenly, and public trust weakens.
That makes this controversy more than a bureaucratic embarrassment. It is a setback for South Africa’s ambition to become a serious AI economy.
It also comes at a time when the continent is trying to build a more coordinated approach to artificial intelligence. The African Union endorsed its Continental AI Strategy in July 2024, positioning it as an Africa-centric and development-focused framework for ethical, responsible, and equitable AI governance.
South Africa should have been one of the countries helping to translate that continental ambition into a credible national framework. Instead, the withdrawal of the draft policy has exposed the distance between ambition and execution.
The way forward cannot simply be to rewrite the document quietly and republish it.

The department will need to rebuild trust through a more transparent process. That means opening the next version to serious multi-stakeholder review, involving local AI researchers, universities, legal experts, entrepreneurs, labour representatives, civil society, data protection specialists, and communities affected by algorithmic decision-making.
It also means grounding the policy in South African realities.
AI policy cannot be copied from Europe, the United States, or generic global templates. South Africa has its own problems: inequality, unemployment, language exclusion, weak public-sector capacity, historical bias in data, uneven internet access, and a need to use technology for development rather than prestige.
A credible national AI policy must speak to those realities.
It must answer practical questions. How will AI affect jobs in South Africa? How will local languages be protected and included? How will public-sector AI systems be audited? How will the government prevent bias in automated decisions? Who owns public data used to train AI systems? How will startups access compute, funding, and research partnerships? What protections will citizens have when AI is used in policing, welfare, education, healthcare, and finance?
The withdrawal of the draft National AI Policy is embarrassing. But it can still become useful if the government treats it as a governance lesson rather than a communications problem.
South Africans deserve an AI strategy built on local evidence, public accountability, and the country’s own intellectual capacity.
