You’d be forgiven for thinking AI had already eaten the world’s writing jobs.
The YouTube and Insta deluge of Get-Rich-Quick-By-Offering-AI-Copywriting-Services content has been both prolific and comfortless, at least for bloggers and writers.
The rise of AI-written content appearing online in a matter of mere months, content which lifts, copies, and synthesises articles without credit or citation, has quickly revealed the first mainstream AI ethics use case.
That use case? The maladroitness of AI ethicists. So far.*
So far, AI ethicists appear to be failing, at least at getting their voices heard. At worst, they're failing to apply their smarts and push proactively for legislation that protects human writers.
In The Conversation, professors from across the world gathered their thoughts about the impact of open-use (but not open-source) AI apps on the future of human work. It makes for bleak reading and offers no solutions, just commentary.
Cool, humanity – so that’s where we’re still at, huh.
Maybe it’s ChatGPT
ChatGPT has become a global obsession: an instant-messenger interface which allows humans to ask questions, get answers, and get their work done (where work may involve research and reviewing corpus data) for them.
At the heart of the obsession, though, is something darker. With an AI language model that does your homework for you and writes almost 100% accurate copy in seconds – can OpenAI’s linguistic machine be used essentially like a digital slave?
The answer, so far, appears to be yes, even if the emboldened humans using ChatGPT don't intend to encourage digital slavery (though, on both a human and a digital scale, they massively are).
ChatGPT Snake Oil Already
The absolute state of this ChatGPT “expert” content, for example, is primed and targeted precisely at the kinds of viewers who might be financially vulnerable enough to fall for confidence tricks like these.
Tricks, you say? Well, it is alluring to think a large language model might make someone tens of thousands of dollars, seemingly overnight and without consequences, particularly as the writers whose work has unwittingly fed ChatGPT currently have no recourse when citationless content is lifted and synthesised from the net.
However, anyone looking into this kind of content in even a rudimentary way can quickly see the problem: if just 10% of these influencers' millions of viewers are following their financial advice, then the market for AI-produced content is already at peak saturation, with detection models potentially ready to penalise it.
As with all tech which makes magic seem real: the ethics matter more than the theatrics, but they will be slow to catch up.
Ideally, for human authors and the skill of writing, it won’t be too late when they do.
* Since writing this article, The Wall Street Journal has published a piece by AI ethicists explaining why ChatGPT's public release without guidance is too experimental and potentially dangerous.