An ethical dilemma: The difficult tradeoffs with fighting hate speech and misinformation online
The Impact of Disruption
2/22/19, 1:00 PM - 1:20 PM (EET)
Big Tech appears to be caught between a rock and a hard place. When platforms such as Facebook and Twitter ban speech that is deemed inappropriate, they are accused of bias and censorship. And when the platforms keep up content that others believe is sowing discord, chaos, and misinformation, the companies are accused of being neglectful. And while tech companies have an incentive to create a frictionless experience online, that very lack of friction may increase negative behaviour online.
Users are now demanding a more proactive response from both large tech platforms and startups. But how should they move forward, and what are the risks?
Do users trust tech companies to make the types of speech determinations that have typically been determined by the legal system?
Will platforms ever increase friction online if that means lower user engagement?
People are often removed from a platform after running afoul of the site’s Terms of Service, which serves as a legal contract regarding the rules of usage. Should platforms be allowed to create whatever rules they would like regarding speech, or should those rules coincide with our conception of the freedom of speech?
Our determination of allowable speech varies dramatically from country to country. Should large global tech platforms adopt universal standards or customize their policies for the norms of each country?
Is the use of algorithms to promote (or lessen) the popularity of speech an editorial decision?