Responding to Hate Speech: A Guide to Community Management

Dealing with hate speech on social media can be a full-time endeavor, but leveraging your community can help shut down the spread of abusive posts and comments.

A couple of weeks ago, I was alerted to a comment on an Instagram post made by LeBron James. James, who has 65 million followers, shared a split-screen photo of George Floyd and Colin Kaepernick with the caption, “This… Is Why.”

At the time, it was the latest in a series of high-profile individuals speaking out against the murder of Floyd while he was in Minneapolis police custody. More posts have followed as the United States saw riots and protests mushroom across the country, an explosion of pent-up unrest in reaction to widespread racial injustice.

The alert came through an account I manage for a college, and the message I received pointed to a comment indicative of a widely held sentiment against James’ call to action against racism. The user, whose identity I’ll leave anonymous, claimed that a member of the college was openly supporting the hateful act perpetrated by the police officer that ended in Floyd’s death.

“He’s probably under the cops [sic] knee for a reason…” was the statement that garnered the concerned user’s attention. The woman who alerted me called for the commenter, whom she presumed to be a member of the college community, to be removed from the campus immediately. She claimed the commenter was an ambassador for the college, and that his words, left unchecked, were a reflection of the college’s values.

The process of managing a non-profit’s social media is often straightforward. Usually, users on the platform visit organizational pages to familiarize themselves with the business, or to ask questions knowing the page is managed by a real person and not an email bot.

That process, however, immediately changes when claims of hate speech are introduced.

Social media provides a massive audience with the tools to connect over shared experiences. But when those experiences turn political, how can community managers uphold their organizational values without infringing on free speech?

Digital giants like Twitter and Instagram amplify user-generated content under the guise of free speech. Despite their attempts to push for fact-checking and accountability, as was the case when Twitter affixed a warning to President Donald Trump’s tweet about “looting” leading to “shooting,” social media sites have become a hotbed of racist rhetoric (and by proxy, cancel culture).

As much as many users try to identify hate speech and abusive acts, the platforms continue to provide an easily accessible gathering place for hate cells. And while social media companies will continue to be guided by revenue projections when deciding how to act on hateful content (Facebook’s Mark Zuckerberg claimed his platform left Trump’s post untouched because Facebook doesn’t “have a policy of putting a warning in front of posts that may incite violence”), non-profit community managers can still exercise their organizational voice to keep their brand from inadvertently supporting hate speech.

Once you find, or are alerted to, hate speech, here are three steps to enact change on the platform.

1. Confirm What You Can (Eliminate Biases)

Two wrongs don’t make a right. An impassioned and unjustified response to hate speech on social media can lead to organizational backlash by instigating a virtual witch hunt. Though concerned users will want immediate action, it is always in an organization’s best interest to confirm the facts.

Starting from the profile of the user allegedly spewing hate speech, take a look around to verify their identity. If the page is public, does the user list their place of work or city of residence?

Capturing screenshots at this stage is vital as many users will switch their public profile to private once they start receiving criticism for their actions.
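If you want a lightweight audit trail for those screenshots, a small script can help. Below is a minimal sketch in Python, assuming you save screenshots by hand into an evidence folder; the folder name, filenames, and `log_screenshot` helper are my own hypothetical illustration, not any platform’s tooling. It records each file’s SHA-256 hash and capture time, so you can later show that the evidence hasn’t been altered since it was logged.

```python
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical folder where screenshots of reported comments are saved by hand.
EVIDENCE_DIR = Path("evidence")
LOG_FILE = EVIDENCE_DIR / "evidence_log.csv"


def log_screenshot(path: Path, note: str) -> None:
    """Append the file's SHA-256 hash and the current UTC time to a CSV log."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["file", "sha256", "logged_at_utc", "note"])
        writer.writerow(
            [path.name, digest, datetime.now(timezone.utc).isoformat(), note]
        )


if __name__ == "__main__":
    EVIDENCE_DIR.mkdir(exist_ok=True)
    # Example usage: replace the filename with a screenshot you actually saved.
    log_screenshot(
        EVIDENCE_DIR / "comment_screenshot.png",
        "Instagram comment reported by a concerned follower",
    )
```

A hash log like this isn’t a legal chain of custody, but it gives HR or investigators a simple, dated record of what you captured and when.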

This step is particularly important in the fight against witch hunts, doxing, and mistaken identity. The internet loves the idea of enforcing justice from afar, but there are times when a simple fact-checking mistake can lead to serious repercussions.

In 2016, the Royal Canadian Mounted Police (RCMP) urged the public in Trail, British Columbia, not to act as vigilantes and hunt down alleged pedophiles. The statement stemmed from an incident in which a man was allegedly taking photos of minors in a restaurant.

Rather than reporting the man to the restaurant’s management, the teens took a photo of him and shared it on social media, encouraging locals to hunt down the presumed offender. The result was a witch hunt in which another individual, who bore a resemblance to the accused, was denied entry into a grocery store.

Despite the police finding both men innocent, the hasty branding on social media disrupted their lives in an instant. What’s more, the man who was thought to be taking photos of the girls turned out to have been holding his phone away from his face, struggling to read the news without his glasses.

Being too quick to vilify people on social media is almost as bad as the alleged negative act or post itself. Haste makes waste, especially for the unfortunate victims of inaccurate social media bashing.

2. Leverage Your Network

You’ve done your due diligence and have verified that the hate speech originated from someone in your organization. Now what?

Build your case by reaching out to your Human Resources department or supervisors to lobby for a full investigation of the matter. Many organizations have a code of ethics on file regarding employee conduct.

According to a report on JDSupra, the state’s highest court upheld the Pennsylvania Department of Transportation’s right to fire Rachel Carr, an employee who engaged in a hate speech-filled rant on Facebook.

The case, which applies specifically to public employees, notes that although Carr was off-duty at the time of her remarks, her actions represented both an “issue of public concern” and an attempt to “adversely affect PennDOT’s mission as an employer.”

Still, the nuts and bolts of the constitutional right to free speech can change depending on the type of individual exercising it (public or private employee), when and where the speech took place (on or off site, and during or after work hours), and the content of the speech. In this sense, the best bet when investigating hate speech on social media is to leverage your network of professionals and contacts who are versed in the area called into question.

3. Respond to Comments

With the framework of a hate speech investigation built, community managers may also find they have to address the comments publicly. When followers of your organization or users of a social media platform alert you to potentially negative comments, it’s important to recognize each of those users personally.

Whenever possible, that means addressing these comments through direct messages. Posts made by brand social media pages are often considered official statements of the brand itself. Though individuals maintain these accounts, the voice used on social pages is presumed to be an expression of the brand’s goals and identity.

For that reason, replying to individual messages about an offense, even with a basic response, takes precedence over making a public post before the matter is thoroughly investigated.

Not everyone will appreciate your message. Social media whistleblowers are notorious for trying to make brands talk themselves into contradictions.

Months ago, I was faced with dozens of messages regarding a student who had allegedly murdered his adopted dog. The messages came from people who love or care for animals and wanted swift action taken against the perpetrator.

Rather than posting a public message or ignoring the inbox altogether, I took the time to message each user individually, assuring them that we were thoroughly investigating the matter and thanking them for bringing it to our attention.

In my case, most users were appreciative of the message, thanking me for acknowledging their concerns and expressing understanding of the organization’s need to be thorough. On the rare occasion of a vitriolic follow-up message, I simply thanked the user once again and left the chat, leaving their attempts to elicit a brash response behind.

As tempting as it is to engage with emotional commenters or trolls, maintaining a diplomatic stance limits the opportunities for a brand’s words to be weaponized against it.


Social media will continue to be a platform for expression for as long as internet access remains largely uninhibited. Likewise, social media sites will continue to harbor pockets of hate speech and conflict.

Managing a brand’s voice against accusations of biased communication requires thorough fact-checking, clear organizational communication, and a cool head.