It’s a question that the University of Göttingen in Germany is attempting to answer. For now, when we talk past each other on social media, the only solution is to delete posts or threads. Hurtful or hateful messages are uttered, but the evidence can be erased. In these cases, the university’s researchers hope to create an AI moderator that can intervene to de-escalate the situation. It’s obvious that there’s some social good in this. Perhaps we’ll see less cyberbullying, which pushes people to the brink of suicide or self-harm. Perhaps the proliferation of online conspiracy theories that develop from hateful ideology will also become more limited. Perhaps we will recover the productivity lost to long-winded online debates.

But the drawbacks are more severe.

  1. We risk losing freedom of expression. Like it or not, hate speech is protected speech, at least in the United States. Despite multiple challenges that have reached the nation’s highest court, the Supreme Court has held that the U.S. Constitution’s First Amendment guarantee of freedom of speech extends to hate speech. Most recently, in 2017, the justices unanimously reaffirmed that there is effectively no “hate speech” exception to the free speech rights protected by the First Amendment. In fact, Justice Samuel Alito wrote on behalf of four of the justices: “[The idea that the government may restrict] speech expressing ideas that offend … strikes at the heart of the First Amendment. Speech that demeans on the basis of race, ethnicity, gender, religion, age, disability, or any other similar ground is hateful; but the proudest boast of our free speech jurisprudence is that we protect the freedom to express ‘the thought that we hate.’” How, then, can we ask a computer to decide something we are still arguing about, much less something we have protected under our founding documents?
  2. It’s lazy. Why should we teach computers something we clearly need to learn ourselves? Isn’t it better for humans to discover how to disagree agreeably than to teach computers how to break up the equivalent of a schoolyard brawl on social media? Don’t we actually wind up farther from civility by entrusting its enforcement to a machine? I contend that the art of intercultural competency is one every human everywhere needs to embrace. It is a skill set born of experience, a deliberately open mindset, and the intentional act of seeking understanding.
  3. Giving AI this task robs us of the chance to dialogue with and understand each other. Just as cancel culture stops a controversial conversation cold, so too does giving AI the responsibility of moderating human discourse. Hashing out thoughts and ideas is uniquely human. Discussion gives us the ability to grow emotionally and spiritually, benefiting not only us as individuals but the human race collectively. Such debate and discussion is the means by which we achieve civility: not by limiting discourse, but by encouraging more of it. A person merely spewing hateful, toxic speech, or any point of view without a rationale, is a hollow shell who will be exposed through engagement and an effort to understand. If that person can offer a rationale, then we have a chance to learn, and perhaps to challenge his or her perspective. This is the heart of humanity. Don’t give it to a robot.

For more of my thought leadership on cross-cultural competency or intercultural competency, pre-order Crossing the Divide, 20 Lessons to Help You Thrive in Cross Cultural Environments.