This week, Mark Zuckerberg, CEO of Facebook, testified before a joint hearing of two Senate committees to answer questions arising from the Cambridge Analytica scandal and Russian interference in the 2016 presidential election.
The questions ranged widely, covering users’ data privacy, hate speech policing, disclosure of usage terms, data security, and even whether Facebook is a monopoly. But one key moment that caught many observers’ attention was when Zuckerberg stated that Facebook is responsible for the content transmitted through its platform.
Heretofore, the “party line” of most of the larger internet companies has been that they are not responsible for, and should not be legally liable for, what others say or write via their platforms. In their view, they are merely communications systems that transmit what others create or share. This position has created significant challenges for victims of libel, reputation attacks and harassment, who have struggled to get malicious and untrue content removed from search engines and social networks.
Fast-forward to Tuesday: Mark Zuckerberg’s statements in the Senate hearing represent a significant departure from the company’s past positions.
At one point, Senator John Cornyn asked whether Facebook and other social media platforms are responsible for their content:
… Previously, or earlier in the past, we’ve been told that platforms like Facebook, Twitter, Instagram, the like are neutral platforms, and the people who own and run those for profit — and I’m not criticizing doing something for profit in this country. But they bore no responsibility for the content. Do you agree now that Facebook and the other social media platforms are not neutral platforms, but bear some responsibility for the content?
Mark Zuckerberg answered:
I agree that we’re responsible for the content, but I think that there’s — one of the big societal questions that I think we’re going to need to answer is the current framework that we have is based on this reactive model, that assumed that there weren’t AI tools that could proactively tell, you know, whether something was terrorist content or something bad, so it naturally relied on requiring people to flag for a company, and then the company needing to take reasonable action. In the future, we’re going to have tools that are going to be able to identify more types of bad content. And I think that there is — there are moral and legal obligation questions that I think we’ll have to wrestle with as a society about when we want to require companies to take action proactively on certain of those things …
I have written previously about how we need to change the laws governing the internet, because there are numerous instances where victims of online defamation have little legal recourse to address the harm done to them.
The primary law that absolves Facebook, Google and others of responsibility for user-generated content published on their sites is Section 230 of the Communications Decency Act. When it was passed, the internet was just emerging as a commercial medium, and lawmakers wanted to give innovative companies a chance to develop and grow without being strangled by legal requirements.
While that made sense at the time, an entire cottage industry of often-predatory sites has since arisen that solicits critical and damaging content about individuals and businesses and then tries to extract money from those targeted. These have included websites publishing arrest records, mugshots, boyfriend/girlfriend ratings, revenge porn (though such sites are being successfully shut down in many cases), ripoff or scam guides, and more straightforward business reviews.
On the face of it, some degree of immunity for search engines and social media sites seems reasonable. After all, third parties author and publish the content transmitted through them. Ideally, those original sources could be sued and legally compelled to remove content that is fraudulent or unfairly defamatory, and the platforms that disseminate the information wouldn’t need to be involved. In practice, however, some websites have engineered themselves to prevent authors from revising or removing content they’ve published, and many originators are located in foreign countries beyond the reach of U.S. laws and legal protections.
This is why it’s significant that Zuckerberg’s testimony departed from the “just a platform” dogma. I imagine there are concrete legal ramifications for a company when its CEO makes statements of this sort, and I also think Zuckerberg fully understood that he might be publicly accepting culpability for damaging content beyond merely the fake news and fake profiles deployed by Russian entities.
I hope he was serious, and I hope his company will now begin making it easier for victims of reputation attacks and harassment to get assistance from Facebook. (A few years back, on behalf of a revenge porn victim, I asked Facebook to remove content that linked to the attacking website. Facebook refused to remove it, even though the content it linked to was voluntarily taken down years ago.)
Regardless of whether Zuckerberg truly intended to convey that he believes some Section 230 immunity should be rolled back, I think he did honestly convey the growing realization among many that Section 230 has left far too big a gap, and that the larger internet corporations bear a significant portion of the responsibility for the content found through their platforms.
In Europe, the “Right to Be Forgotten” (RTBF) enables individuals to request the removal of content that invades their privacy or is inaccurate or unfairly damaging. To be clear, this issue is separate from data privacy: it concerns what others publish about you online, rather than data you make available about yourself.
Though the Right to Be Forgotten wasn’t mentioned during Zuckerberg’s Senate hearing, three senators referenced the European Union’s General Data Protection Regulation (GDPR) in the context of privacy, which could indicate that some are favorably inclined toward more European-style legislation, such as RTBF.
Mark Zuckerberg’s apparent admission of responsibility for content posted on Facebook could be a watershed moment: a Silicon Valley company shifting from outright resistance to needed change toward actively supporting it and taking part in an intelligent process to evolve the law reasonably.