Artists, journalists and Aboriginal cultural workers have launched the "Stop AI Theft" campaign to demand stronger protection as the use of generative AI grows. Creators say some AI models scraped internet content without permission and sometimes produce near-identical works. Voice actors reported that their voices were cloned without consent, and local journalists said their reports were plagiarised and reused on AI-generated sites.
The Tech Council of Australia reported in August 2025 that 84 percent of Australians in office jobs use AI at work and estimated large economic gains by 2030. A January 2026 report from the University of Sydney warned that journalists are increasingly invisible in generative AI search results. Indigenous activists said fake Indigenous art was being produced and sold.
At a July 2024 Senate hearing the MEAA asked for an opt-out from AI training, legal rules requiring compensation, and transparency about training data. Campaigners held talks with tech companies in August 2025 and welcomed the federal government's October 2025 move to maintain copyright protections. They also pushed for a national review of laws under the December 2025 National AI Plan.
Difficult words
- generative — creating new content using computer models
- scrape — copy information from the internet automatically
- clone — make an exact copy of a person’s voice
- plagiarise — use someone’s writing and present it as yours
- transparency — clear information about how something works
- compensation — money paid to someone for their work
- opt-out — a choice to not be included or used
- copyright — legal right over original creative work
Discussion questions
- Should creators receive compensation when their work is used to train AI? Why or why not?
- How might cloned voices affect voice actors and their work?
- What actions should the government take to protect Indigenous art and creators?