Awareness Raising Requires Multiple Stakeholders, And B2B Businesses Might Want To Develop Redress Mechanisms For (Regional) Social Media Giants
According to a news article on News24, hate speech on South African social media has most recently been targeting Zimbabweans in particular, whereas hate speech in Ghana has been directed towards LGBTQI+ communities. With disinformation, fake news and hate speech constituting an issue in various African countries, South Africa has been discussing new laws to address both hate speech and hate crime. As reported on BusinessTech in September 2022, John Jeffery, Deputy Minister of Justice and Correctional Services, stated that existing laws are “not efficient enough in that, under the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA), it involves civil matters where a person must approach the equality court within their capacity”. Rather than forcing citizens to pursue their right to ‘mental/physical integrity’ by themselves, which arguably constitutes an unspoken part of citizens’ right to be “inviolable […and] entitled to respect for [their] life and the integrity of [their] person” as expressed in §4 of the African (Banjul) Charter On Human And Peoples’ Rights (ACHPR), South Africa is opting for a shift in responsibilities.
The Prevention and Combating of Hate Crimes and Hate Speech Bill, as BusinessTech reemphasizes, will make sure that “a hate crime would be followed up by the state and criminally [prosecuted]”. As the bill, which was originally published in the Government Gazette in March 2018, further specifies, “[t]he State, the South African Human Rights Commission and the Commission for Gender Equality have a duty to promote awareness of the prohibition against hate speech, aimed at the prevention and combating of these offences”. However, whereas the aforementioned bill does not clarify the duties of (social) media businesses and businesses per se, Emma Sadlier Berkowitz, Social Media Law Specialist at the Digital Law Company, told BusinessTech that changes to WhatsApp, which enable group admins to delete messages for all group members up to a couple of days after they were posted within a particular group chat, could expose group admins to serious legal risks. As Sadlier Berkowitz emphasized, once a group admin ‘manages’ such messages, they also become “legally responsible for the content”. Provided that no adaptations are made to protect group admins from sole liability, one could argue that the right to freedom from online hate speech is one that will somewhat ‘dictate’ how gatherings can and cannot take place. Starting at the micro-level and leading up to the macro-level (i.e. from citizen to business to state), different types of societal actors will be responsible for creating a safe online environment to ‘live and commute’ in daily.
Whereas such a ‘responsibility shift’ makes sense, especially in an age wherein kindness to one another and the planet matters a great deal on the pathway towards a sustainable life, at the individual level it could also conflict with individuals’ rights to equality (§9 of South Africa’s Bill of Rights), privacy (§14 of the Bill of Rights), freedom of association (§18 of the Bill of Rights) and just administrative action (§33 of the Bill of Rights). Rather than shifting the responsibility onto a few individuals who manage a group and are not always involved in what happens within it, except in very obvious cases (i.e. admitting profiles that explicitly incite violence etc.), the focus with regard to punishment should remain on the perpetrators of hate speech and hate crimes. However, this may be different in the case of social media businesses and platforms. The difference lies in the fact that social media businesses can ensure that employees are hired specifically to maintain a non-harmful, non-discriminatory and non-violent online environment, whereas this does not apply to privately initiated gatherings, though certain exceptions might come into play (i.e. large social movements that employ volunteer workers, gatherings of student associations etc.).
In line with changes introduced by the Films and Publications Amendment (FPA) Act 2022, South Africa’s Film and Publication Board (FPB) will now be able to “regulate all online content published [in the country]”. While this act can regulate publications in the domains of art, media and television in the widest sense, it is arguably ineffective at tackling individual publications and incidents of hate speech, including by different types of companies. This is because the procedure “to register with and submit all content to the FPB for classification” is simply too complex and not in the interest of businesses, including social media businesses. And whereas the South African Advertising Regulatory Board (ARB) also released a Social Media Code aimed at ‘influencers’ in 2019, the ARB Social Media Code mainly concerns itself with preventing misinformation so that businesses and brands cannot misuse advertising. In other words, it is not enough to combat hate speech and hate crimes – it does not capture the full scope of what happens on social media. As the United Nations (UN) Strategy and Plan of Action on Hate Speech points out, whereas the responsibility to combat hate speech rests with state actors, other stakeholders such as CSOs, tech companies, and (social) media outlets and platforms should also be involved in this fight to accelerate impacts.
More precisely, one of the recommended actions of the UN’s strategy on hate speech is to “promote self-regulation and ethical journalism”, for instance, by “[e]ncourag[ing] private media organizations and tech companies, particularly online social media platforms, to put in place and implement policies that are in keeping with international human rights law and the Guiding Principles on Business and Human Rights, including the principle of due diligence”. As David Kaye, former Special Rapporteur on Freedom of Opinion and Expression, argued in a 2019 press release, concerns about hate speech are to be taken seriously; however, there needs to be a balance when it comes to responsibilities. Making companies liable for hate speech, Kaye estimated, can lead to handing them a huge say in decisions about “public norms, and risk undermining free expression and public accountability”. Taking Kaye’s remarks seriously, South Africa should probably opt for a balanced approach to combating and addressing hate speech, which specifies rules for individuals, companies, tech companies, social media platforms, media outlets etc. separately. In addition to legal guidelines and laws, it might be necessary to specify rules for cooperation between different actors in the pursuit of an internet free of hate speech, hate crime and discrimination.
What one may have to accept is that fully preventing discrimination might be a utopian aim. Based on this insight, the double-edged sword of (un)restrictive policies and the fact that hate speech, discrimination and violence arise from societal issues that cannot be addressed through punishment alone, it might be necessary to legally determine how companies and other stakeholders get involved – that is, how frequently, through which mechanisms etc. – to contribute their fair share to raising awareness and combating hate speech. Yes, awareness raising is only the first step, but it can have far-reaching impacts where one huge social media outlet influences another. Finding a way to negotiate best practices in different countries and regional contexts among the various associated actors could also be key to motivating citizens to contribute to the fight against hate speech themselves, for instance, by making consumer choices based on due diligence fulfilment, including in the domain of anti-hate speech. Of course, under such a scenario, attention would need to be paid to the motivations of companies so that ‘greenwashing’ in the domain of ‘anti-hate speech’ can effectively be prevented. Moreover, there should be a limit to influencing consumers through state–private sector cooperation so that individuals’ right to free choice does not become an issue.
South Africa’s Bill of Rights currently does not refer to a right to free choice with regard to economic decisions, probably because capitalism operates on this very principle. Instead, it addresses citizens’ rights with regard to political decisions (§19 of the Bill of Rights). However, on the pathway to a more sustainable future, it is imaginable that consumer protection and freedom of (consumer) choice will eventually have to be reconceptualized as the obligations of businesses change, including in the light of due diligence. As Paulo de Tarso Lugon Arantes writes in his 2022 article ‘The Due Diligence Standard and the Prevention of Racism and Discrimination’, “a law-making process to tackle racial discrimination should focus on dismantling its structure ‘beyond a mere summation of individualized acts’”. And yet, it is also important to discourage individual acts of discrimination. In South Africa, as the South African Human Rights Commission (SAHRC) emphasizes in an information sheet on hate speech, PEPUDA is aimed at the prohibition of hate speech and specifies the function of the equality court and the appropriate procedures for settlement (§21(2c)) and compensation (§21(2d)). That said, only a few cases, including in 2022, have been brought before the equality court, which underlines that compensation for hate speech might be difficult to obtain in practice. As Hildegard Stienen points out, “[f]rom a psychological view point, it should be the aim that [victims of online hate speech] win back [their] inner coherence and self-efficacy and that they don’t isolate themselves”. In other words, there is a possibility that addressing hate speech and hate crimes could work, at least to an extent, by establishing forums wherein victims can speak up safely, receive psychological support and are embraced by a caring community.
How such forums could be created might, however, be more difficult to work out, including with regard to the role of social media platforms and businesses in such a process. This is because the internet and its algorithms can hardly filter out hate speech effectively, even within ‘supportive’ forums, also considering the amount of staff needed to monitor such communications, which in turn may violate individuals’ right to privacy. As Ge Chen explains in an article about hate speech in China and authoritarian countries, the line between censorship governance in authoritarian and democratic countries is no longer so clear, because even liberal democracies need to regulate content posted on the internet, and even authoritarian states may sometimes opt for less restrictive measures considering how censorship can impact their economies. As Chen continues, despite the fact that the internet originally constituted a ‘tool for liberation’, its second phase serves as a reminder that freedom should not turn into anarchy. As Bright Nkrumah argues in a 2018 article, South Africa has been plagued by a history of discrimination and racial violence. Whereas this has led to an emphasis on the importance of non-discrimination, racial and gender stereotypes are formed early in childhood, and these stereotypes still constitute an issue in South Africa’s society today. Harmful attitudes towards migrants in South Africa only underline that marginalized, vulnerable and discriminated-against groups need bottom-up support that manifests in the media as an attempt to rewrite what has been written, rather than an attempt to take ‘all’ harmful content down through a ‘tight’ approach to censorship.
The latter is not to say that harmful content should remain online, but rather that a more complex approach to ‘making (social) media safe again’ is needed. Such an approach may also require lawmakers to rethink whether “racism [really] is a belief”, as Martin van Staden, member of the Executive Committee and the Rule of Law Board of Advisors of the Free Market Foundation (FMF), told BusinessTech in a discussion about the Prevention and Combating of Hate Crimes and Hate Speech Bill back in 2018. Whereas attitudes about gender, race etc. may indeed be formed in early childhood as a result of education and socialization, this does not justify classifying racism as a belief once it is put into action. As Hildegard Stienen reemphasizes, constant exposure to hate speech or cyber-bullying “can lead to […] different forms of traumatization”. Rather than causing harm only in a particular moment, hate speech can lead to subsequent psychological trauma and constitutes, in essence, an attack on the basic needs of individuals, who are consequently burdened by what Gehring has referred to as the “‘physical force of the language’” (Ger. “‘Körperkraft der Sprache’”). And while Stienen notes that discrimination against specific groups can lead such groups to identify with the plight of ‘one another’ – a homogenization with positive (i.e. the formation of resistance and social movements) and negative (i.e. the homogenization of particular groups via the media, with an increased risk of further stereotypes being created) side effects – the specialist in psychiatry and psychotherapy also remarks that the “increasingly stronger association with group norms on the internet” can lead to a clash between different groups and a deepening of stereotypes.
Another aspect which lawmakers in South Africa might have to take into account when seeking more effective solutions to hate speech and hate crimes is that taking perpetrators on board might be essential to adopting realistic solutions and effective legal redress. The 2018 article on BusinessTech showed huge discontent with the complexity of the characteristics linked with hate speech in the Prevention and Combating of Hate Crimes and Hate Speech Bill – a list that is arguably not complex at all. Among the characteristics the hate speech bill protects are: race, gender, sex including intersex, ethnic or social origin, colour, sexual orientation, religion, belief, culture, language, birth, disability, HIV status, nationality, gender identity, albinism, and occupation or trade. Not only is this list not exhaustive, but doubts about the efficacy of such lists appear justified considering how many criteria could be added. Whereas South Africa’s apartheid history should arguably be reason enough to protect more strongly against racism and xenophobia, effective legal redress needs to focus more on the healing of victims than on the perceived severity (i.e. as perceived by anyone other than the victim) of hateful remarks. While the same kind of physical assault might affect particularly vulnerable individuals more than others (i.e. the duration of the healing process, subsequent physiological damage etc.), the impact of psychological harm may vary depending on individual resilience, prior history of trauma, psychological skills, physical resources, social support etc. For this reason, lawmakers should also observe how business decisions and the design of social media platforms may ‘invite’ hateful comments, because they inevitably shape the way individuals relate to one another within and across national borders.
It may, for instance, be debatable whether social media platforms such as Facebook should really provide tools to express negative sentiments about each and every comment. Rather than ‘feeding’ hate speech and allowing for the formation of ‘pseudo-social-movements’ in an online and “agonistic public sphere”, social media platforms should be designed to foster productive discussions and prevent misunderstandings. As Chen argues, “[w]hereas the anonymity, immediacy and interoperability of the Internet makes it extremely convenient to distribute messages across the world, the automatic sharing and volatility of hate speech renders it difficult to provide a timely remedy for the harms of equality rights”. And as Josephine B. Schmitt argues in an article about online hate speech, “[a]nonymity, low levels of social control based on a lacking face-to-face interaction with representatives from a particular foreign group as well as the low probability to – also criminally – be declared liable for one’s own utterances and actions, contribute to the lack of restraint”. To sum up, whereas the legal prosecution of hate speech and hate crimes matters, social media platforms play an important role in research and development (R&D) on prevention measures as well as their implementation. Similar to standing up for fair trade, businesses should also come together to stand up against hate speech. Especially because a large share of hate speech probably occurs on the most common social media platforms, B2B businesses should concern themselves with creating innovation in the domain of monitoring and addressing hate speech in particular regional contexts and on particular platforms. They could also ensure, in cooperation with other stakeholders from local CSOs, governments and tech companies, that issues of ‘hot and harmful debate’ are actively discussed with community members to prevent hate speech bottom-up.
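To illustrate why regional-context monitoring is genuinely hard, and why it is an area for R&D rather than a solved problem, consider a deliberately naive sketch of the kind of tool a moderation team might start from: a region-specific keyword flagger. This is a hypothetical illustration, not any real platform’s method; the function names and the lexicon are invented for demonstration.

```python
# Illustrative sketch only: a minimal keyword-based flagger showing why
# naive filtering struggles. The lexicon and names are hypothetical.
import re
from dataclasses import dataclass

@dataclass
class Flag:
    term: str
    start: int
    end: int

def flag_terms(text: str, regional_lexicon: set) -> list:
    """Flag whole-word matches from a region-specific lexicon.

    A real system would need context models, multilingual support and
    human review: exact-match lists miss obfuscations (e.g. "h8") and
    over-flag benign uses of flagged strings.
    """
    flags = []
    for term in regional_lexicon:
        # \b word boundaries avoid matching inside longer words
        for m in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            flags.append(Flag(term, m.start(), m.end()))
    return sorted(flags, key=lambda f: f.start)

# Hypothetical lexicon for demonstration only
lexicon = {"slurA", "slurB"}
print(flag_terms("Nothing harmful here.", lexicon))
```

Even this toy example makes the policy point concrete: the lexicon must differ per region and language (the Zimbabwean and Ghanaian contexts mentioned above would need entirely different lists), and any purely lexical approach will both over- and under-flag, which is why human review and community consultation remain indispensable.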
Are you a start-up or an SME with an interest in making a sustainable and/or social change somewhere between Germany and Africa? Then our team will happily support you on the legal side! We specialize in supporting multicultural businesses in Germany, African businesses in Germany, and businesses across various African jurisdictions. While our support starts with providing help on immigration and relocation matters, it does not stop there – taxes, tech, intellectual property… You heard us! We provide comprehensive legal support. Contact us today to find out more and discuss your concerns!