Artificial intelligence seems to have blown past the Turing Test while no one was paying attention. Wide-eyed conjecture about what this technology will be able to do has supplied a bounty of breathless reporting in the past few weeks. Rather than attempt to temper that speculation, I think I will jump on this particular bandwagon. My reason, however, has not been covered (at least, not in the small amount of reporting I have consumed on this topic).

I view the issue of AI from the perspective of how reporting has changed since the advent of Fox News. The foreign-created propaganda platform charged onto the scene to compete (very successfully) against legitimate news organizations, with the apparent aim of throwing a monkey wrench into the works of representative democracy. In non-journalists such as Ingraham and Hannity, the [alleged] News Corporation found that an amoral person can be counted on to faithfully deliver a false narrative in the guise of legitimate news.

Amoral individuals still fear consequences. Even in the current state of jurisprudence (where a large fraction of the judiciary serves to alter the law via increasingly insane “interpretations”), there is a non-zero chance that a propagandist will be held accountable for large-scale criminal activity. The discussions among the Fox News propagandists, revealed during the grand jury hearings concerning the ex-president, indicate that those individuals fear consequences.

AI will not. There was a point beyond which these individuals (who had already put all their chips down on “stoking crackpot conspiracy theories” as the jackpot winner) were hesitant to proceed. AI has no such point; it will never face consequences. If the current model of paying humans to convey a false narrative has worked so effectively that it put an accused child rapist into the White House, then the winning next step is obvious: not paying an AI to convey a false narrative, free of the human restraint that fear of consequences imposes.