Harmful Content on Social Media

This article explains to the public how to handle harmful content they might be exposed to on different social media platforms.

Over the past decade, the internet has helped us in many ways. It has reunited families and allowed others to keep in touch, assisted rescue efforts after several kinds of disasters, made it easier to raise money for charity, and helped bring about many kinds of change. However, just like everything else, the internet has its pros and cons: it has also made it easier to share harmful and inappropriate content. 

So how are the major social media platforms dealing with harmful content shared online, keeping users safe while safeguarding freedom of expression, and what can you do to prevent harmful content? 


Social media policies to monitor harmful content  


With more than one billion users worldwide and several billion pieces of content and social media posts shared every day, Facebook wants to guarantee freedom of expression but must be vigilant about what is published to avoid abuses.  

According to Facebook CEO Mark Zuckerberg, “It’s impossible to remove all harmful content from the Internet, but when people use dozens of different sharing services, all with their own policies and processes, we need a more standardized approach.” 

So, what is Facebook doing about it? Facebook breaks down unacceptable posts and content into six categories: “Violence and Criminal Behaviour,” “Safety,” “Objectionable Content,” “Integrity and Authenticity,” “Respecting Intellectual Property,” and “Content-Related Requests,” and specifies the rules users need to adhere to if they don’t want to be banned, known as its Community Standards.  


How can social media users handle harmful content? 


Twitter stands for one principle: freedom of speech. In this sense, the social network does not want to have to moderate tweets. But what happens when harmful content is published? Where is the line between freedom of expression and censorship? 

The social network is clear on the issue in its terms of service: Twitter does not moderate content submitted to its network, whether photos or text. The first justification is the principle of freedom of expression as a human right; the second is just as legitimate: Twitter cannot control the millions of tweets posted every day. 

What can you do about it? 

After all, Twitter remains a business and does not take a position on the tweets of its users, and the social network tends to let information flow as freely as possible, regardless of content. It does not moderate content but holds its users responsible for the content they provide. That said, Twitter works with a wide network of partners to prevent abuse.    



Instagram intends to combat several phenomena related to harmful content via a dedicated in-app tool. These include comments that advertise or have nothing to do with the shared content, as well as intimidating or threatening messages posted on Instagram. This tool should allow you to create a first filter, which will automatically hide unwanted comments. Instagram says the tool is the first in a series of features designed to keep comments constructive, useful, and positive on the social network. 

What can you do about it? 

To access it, go to the application settings via the menu at the top right of your Instagram profile. A new "Comments" section makes it easier to moderate comments and fight spam. It's simple to use: just list the keywords that regularly appear in spam comments, and comments containing them are automatically moderated. The choice is yours: you can use your own keyword list, or the list of the most frequently used spam keywords developed by Instagram. 
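Under the hood, a keyword-based comment filter of this kind is straightforward: any comment containing a blocked keyword is hidden. The sketch below is a hypothetical illustration of the idea, not Instagram's actual implementation; the keyword list and sample comments are invented examples.

```python
# Hypothetical sketch of a keyword-based comment filter, similar in
# spirit to Instagram's tool. Keywords and comments are made up.

BLOCKED_KEYWORDS = {"follow4follow", "free followers", "click my link"}

def is_spam(comment: str) -> bool:
    """Return True if the comment contains any blocked keyword."""
    text = comment.lower()
    return any(keyword in text for keyword in BLOCKED_KEYWORDS)

def visible_comments(comments: list[str]) -> list[str]:
    """Keep only the comments that do not match the keyword list."""
    return [c for c in comments if not is_spam(c)]

comments = [
    "Great photo!",
    "Get FREE FOLLOWERS now!!!",
    "Love this place",
]
print(visible_comments(comments))  # the spam comment is hidden
```

Matching is case-insensitive here, which is why the all-caps spam comment is still caught; a real platform filter would likely also handle misspellings and symbol substitutions.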



Snapchat does have community guidelines that it highly recommends all users follow. The notable rules are the obvious ones, such as no sexually explicit content, no content that exploits or endangers children, and no harassment or threats. Snapchat says: “If you violate these rules, we may remove the offending content or terminate your account... If your account is terminated for violating our Terms of Service or these Guidelines, you won’t be allowed to use Snapchat again.” 

What can you do about it? 

There is no way to alert Snapchat of bullying or harassment from within the app. Instead, there is a form you can fill out online.  


Bottom line: the internet is a high-risk place for offensive and inappropriate content of all sorts. Install filters and use parental controls to filter content, monitor your child’s use, and even block specific sites. You can also easily install an ad blocker to stop offensive content from popping up in advertisements. And do not hesitate to report offensive content to the site administrator, using the ‘flag’ or ‘report’ links near the content.  
