Since the recent popularization of large language model artificial intelligence applications, voices have sprung up stating one answer or the other to the question, “Is AI dangerous?”

The answer is an obvious, “Yes.”

The sheer variety of answers muddles the question in a way that almost seems by design.

Propaganda has a long history. The overarching goal is to inform (sometimes disinform) selectively in a manner intended to induce an emotional response. It may be spread for good, for evil, or anywhere in between.

Propaganda has historically been spread by two sorts of people: activists and shills. Shilling – the informational equivalent of prostitution – has a downside. The sort of person who knowingly spreads disinformation for personal profit is not generally trustworthy. As the message moves along the continuum from “neutral” towards “evil,” the sort of person required shifts from “morally questionable” towards “outright amoral.”

Propaganda in support of evil has always had a boundary. The number of amoral people (both activists and shills) capable of presenting propaganda in a believable manner is limited. Moreover, the extent to which an individual is willing to go is typically limited by an instinct for self-preservation. Criminal activities – such as actively calling for the murder of a particular out-group – may bring consequences. For a bot, even the potential for consequences does not exist. Remove the step of finding a reasonably charismatic, amoral person who presents a veneer of intelligence, and a bottleneck disappears.

The skids have been greased, and we should expect a rapid descent: first deeper into kleptocracy, thence into an increasingly concentrated plutocracy, and – perhaps – ultimately culminating in a pure dictatorship.