Let's start with the headline, because it deserves to be said plainly:
South Africa's government released an 86-page national policy on Artificial Intelligence. The policy was designed to govern how AI gets used in this country, who regulates it, who benefits from it, and who gets protected from its risks. It was approved by Cabinet. Published in the Government Gazette. Open for public comment.
And it was apparently written, at least in part, by AI, which proceeded to hallucinate at least six of its own academic citations. Journals that don't exist. Research papers no one ever wrote. Authors who appear to be complete inventions.
The policy governing AI integrity was built on a foundation of AI-generated fiction.
How It Unravelled
The scandal broke when News24 reported that the policy document cited academic journal articles that don't exist, with several authors credited with foundational research who had never written on the topics attributed to them.
An internal review confirmed the problem. Minister Solly Malatsi acknowledged it directly: "The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened."
He's right that it shouldn't have happened. But let's be specific about what "proper verification" actually means: a human being checking the references before a Cabinet-approved policy goes into the Government Gazette. That didn't happen. Malatsi said there "will be consequence management for those responsible for drafting and quality assurance." The policy has since been withdrawn.
The Irony Is Almost Too Much
The document set out bold plans: new oversight bodies including a National AI Commission, an ethics board, a dedicated regulatory authority, and an AI insurance superfund. It spoke of tax incentives, skills programmes, and positioning the country as an AI leader across Africa. Barely two weeks later, the whole thing was quietly withdrawn.
The very policy arguing for "human oversight of AI systems", "sufficient explainability", and "accountability in AI decision-making" apparently skipped the human oversight part at the most critical stage: before it was published.
Khusela Diko, chairperson of Parliament's communications committee, said publicly: "Withdraw it… and subject it to the rigorous review demanded of a national policy on the most transformative technology of the 21st century", adding a pointed jab: "without using ChatGPT this time."
Why This Matters for Ordinary South Africans
This isn't just embarrassing. It's a problem with real consequences.
We have millions of young people with no formal job prospects, and a growing number of them are trying to build income through freelancing, digital skills, and the very technologies this policy was supposed to govern. The conversation about how AI gets used here, who builds it, who regulates it, and who gets protected from its worst effects is genuinely high stakes.
In a country facing high unemployment and real infrastructure gaps, misdirected resources matter. Public trust in the government's ability to manage complex technological issues may have suffered another setback.
We needed our government to show up with credibility on this issue. Instead, we got a lesson in what happens when you hand a sophisticated tool to an institution that hasn't yet figured out how to use it responsibly.
We're Not the Only Ones
Before we spiral into South African exceptionalism about failure, this is a global problem.
In September 2025, a major education reform report in Newfoundland and Labrador, Canada (ironically, one that called for the "ethical" use of AI in schools) turned out to contain more than 15 fabricated sources. One cited a nonexistent National Film Board movie; another lifted a fake example straight from a university style guide meant only as a template.
Governments worldwide are reaching for AI to speed up drafting and research without putting oversight structures in place to catch the technology when it confidently lies. That's what AI hallucination is: the model doesn't flag uncertainty; it simply fills the gap with something plausible-sounding. Fabricated citations are one of the most well-documented failure modes of large language models.
The difference is that South Africa was writing a policy specifically about this problem. That's the part that stings.
What This Actually Tells Us
Two things can be true at once:
One: AI tools are genuinely useful. They can help structure documents, synthesise ideas, and produce faster first drafts. The fix is not to abandon AI; its ability to summarise, draft, and spot connections serves everyday tasks and serious research alike. The challenge is learning to use it without surrendering judgment.
Two: The people in this country with the most power to shape AI governance have not yet demonstrated they understand the basics of using these tools responsibly. You don't publish AI-assisted work without verifying every factual claim. Full stop. That's not advanced knowledge; it's the first thing anyone building with these tools needs to know.
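To make that concrete: even a few lines of scripting can catch the kind of thing that slipped through here. Below is a minimal, hypothetical sketch (the reference format and function name are assumptions, not anything from the actual policy) that flags reference entries lacking a DOI or URL, so a human reviewer knows exactly which citations must be chased down by hand before publication.

```python
import re

# Hypothetical pre-publication check, not the government's actual process:
# flag any reference entry with no DOI or URL, since those are the ones a
# human must verify manually before the document goes out.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/\S+")
URL_PATTERN = re.compile(r"https?://\S+")

def flag_unverifiable(references):
    """Return the entries that carry no resolvable identifier."""
    return [
        ref
        for ref in references
        if not (DOI_PATTERN.search(ref) or URL_PATTERN.search(ref))
    ]

refs = [
    "Smith, J. (2021). AI governance basics. Policy Journal. doi:10.1000/xyz123",
    "Invented, A. (2022). A paper that may not exist. Nonexistent Journal.",
]
for entry in flag_unverifiable(refs):
    print("CHECK MANUALLY:", entry)
```

A script like this doesn't prove a citation is real, of course; it only guarantees that nothing reaches print without either a checkable identifier or a named human signing off on it. That second step is the one that was skipped.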
The rest of us (freelancers, entrepreneurs, small business owners, digital workers) are expected to build with AI, earn from AI, and compete using AI. We're doing it without institutional support, without a national framework, and now without even a policy document we can actually cite.
So Where Does That Leave Us?
Deputy President Paul Mashatile had recently described the policy as central to the country's broader Fourth Industrial Revolution strategy. That strategy is now on hold, the policy is withdrawn, and the public comment period, which would have allowed civil society, tech workers, educators, and freelancers to actually shape the direction, has been shut down before it properly began.
The people who most need a seat at this table are once again watching from the outside while the institutions supposed to represent them fumble the basics.
Here's the raw truth: South Africa's AI future will not be built by the institutions that couldn't fact-check a reference list. It will be built by the people in this country who are already using these tools every day to earn, to create, and to compete, and who understand both their power and their limits.
That's you. That's us.
Sources: News24, IOL, CNBC Africa, The South African, TechNext24, APA News
If you haven’t already, sign up to Freelance Forward - https://www.profreelance.co.za/subscribe
Resource Archive

PROFREELANCE (Pty) Ltd
2023/279056/07
The content in this newsletter is for informational purposes only and does not constitute financial, legal, or professional advice. Pro Freelance and Freelance Forward are not affiliated with or endorsed by the platforms or tools mentioned (unless stated otherwise), and we are not liable for any losses, damages, or issues arising from your use of them. Always do your own research before making decisions related to your freelance business.