Traversing the Blurry Landscape of Content Moderation: Protecting Online Speech in the Age of Mis- and Disinformation
Proposed by: Software Freedom Law Center, India
*English-language session
Session Description:
On 10th January 2025, Mark Zuckerberg, CEO of Meta Platforms, announced significant changes to the company's content moderation policies, including the discontinuation of its partnerships with fact-checking organisations. His stated rationale is to reinforce the right to free expression online. In place of fact-checking, Meta will introduce a system similar to community notes. This announcement comes amid a major political shift in the United States, following the outcome of the 2024 US Presidential Election.
Considering the vast usage of social media platforms around the globe, this move highlights the complex and seemingly intractable nature of content moderation in the social media landscape. Identifying and moderating harmful content has become even more challenging since the advent of artificial-intelligence-based technologies such as deepfakes. Moreover, the photo-realism of such content continues to improve, blurring the thin line between what one considers to be true or false. Furthermore, law and policy in this area have failed to anticipate the practical implications that preventing the dissemination of mis- and disinformation has for free speech online. This tension becomes even more sensitive given that mis- and disinformation often intersect with other forms of harmful content, such as online gender-based violence, discriminatory speech, and hate speech.
Consequently, fact-checking and media literacy could be crucial first steps in building digital resilience on social media platforms. This session will explore how trust and resilience can still be built through initiatives that educate people on the basics of fact-checking and improve their engagement with platforms, ensuring the accountability and reliability of information in the social media landscape.
Specific issues to be discussed:
1. How do partnerships between social media companies and independent fact-checking organisations enable a safer information landscape?
2. Given recent developments, what are the anticipated opportunities and challenges in content moderation regulation going forward?
3. How can law and policy on content moderation facilitate a healthier online space, ensuring that freedom of expression is not unduly restricted in attempts to curb the proliferation of mis- and disinformation?