Joe Andrews

Speaking of: CNET and AI Journalism

I'm not anti-technology. I'm just anti-leech.

CNET had to apologize last week after it was revealed the outlet had been publishing stories written by artificial intelligence without clearly disclosing that fact. The technology was only deployed to write some basic financial explainer pieces, so the stakes were pretty low. But even though I think AI will continue to change how every profession does its job over the next 50 years, bringing this technology into journalism while it's still in such a nascent stage feels reckless at best and dangerous at worst.

I'm not sure there's a field more reliant on strong ethics than journalism. The media has the power to drive a society in whatever direction it wants, and with that great power comes great responsibility. Good journalists know this and are thoughtful about the stories they chase, how they seek out information, and what level of uncertainty they'll accept before reporting something as true. AI has none of these guardrails. AI cannot discern the truth. AI does not know when something isn't adding up. AI cannot sense when a source might be unreliable. AI simply takes information in, aggregates it, and spits it back out. Done deal. Hopefully an editor would be able to sniff out the places where an AI writer screwed up, but an editor's job is to turn a good piece of writing into a great piece of writing, not to turn every false statement into a true one. And judging from CNET's admission that a "small number" of the AI-written pieces required "substantial" corrections, it doesn't seem like most editors are good at that task anyway.

On top of that, AI cannot do original reporting yet, and we're still a long way from that being possible. All AI can do at this stage is scan a bunch of existing pieces and reword the information in them into an "original story." In fairness, plenty of news outlets already do exactly that with human writers for virtually every story they publish. But the crux of a journalist's job is to break new stories, develop them, and invite new voices into the conversation around them. AI can do none of those things, and the fact that it isn't even capable of original reporting makes the whole enterprise feel sleazier. CNET has designed a product whose sole purpose is to leech off other human reporters, and that's a lazy, uninspired, and damaging plan.

And that's pretty much what AI journalism is today: taking a bunch of existing articles about a topic from actual reporters, copying and pasting most of that information into a new story, screwing up a few facts along the way, and publishing it online without a sufficient review process. There will come a day when AI and journalists can collaborate more effectively, but we're not there yet, and given how important strong journalism is to society, it scares me to see CNET try to smuggle this technology into the mainstream.
