Twitter and Cyberbullying – Can Contracts Help?
The always excellent Bob Sullivan recently wrote a post on his blog about the nastiness that resulted from Curt Schilling’s proud daddy post. (The nastiness is seriously nasty). As Sullivan points out, and as I’ve argued here and here, online companies like Twitter have a business responsibility to make sure the services they offer are safe. Online, of course, everything turns into a debate about free speech, even when the so-called speech is obviously obscene (and please, don’t argue with me about this one – if you saw the tweets, you would agree that a “reasonable person” would think they were obscene) and even though the billion-dollar companies are not state actors. The problem is that Section 230 of the Communications Decency Act has been construed very broadly by courts to protect websites like Twitter from liability for content posted by others. That gives these companies little incentive to invest resources in policing their sites. But as Sullivan notes, if they don’t start to clean up their sites, people might start leaving in droves.
So what should businesses do? One thing they could do is start taking their contracts seriously. We are all familiar with clickwrap and browsewrap agreements that nobody reads. They often contain codes of conduct or, in the case of Twitter, “content boundaries.” Companies can start making these agreements more readable and salient. They can start by actually enforcing them. For example, Twitter can enforce its content boundaries by kicking users off the site or charging them a fine for violating the rules (maybe after a warning), which may help defray the costs of policing the site. Companies can also post a “warning” that abusive tweets will be subject to a fine or suspension and force users to “click” to acknowledge they have read the warning. I suspect that many who tweet impulsively later regret it, so a warning at the point of posting might make some think twice.
I realize that hiring people to evaluate each reported post might take time, so the better solution might be software that flags certain posts and sends a warning to the user to reconsider the post. The warning could also contain a reminder that the user will be liable for damages if the tweet is defamatory. All this scary stuff is in the contract – but because it is contained behind a hyperlink, few users will actually read it. An email delivered to the user, reminding them of their contractual obligations and the scary things (public condemnation, suspension or expulsion from the site, liability for lawsuits, maybe even criminal prosecution if the tweet is threatening enough) that might happen to them if they violate these obligations, might be more effective. Some users may voluntarily take down the post in response to the automated email, which may cut down on the number of tweets subject to human review.
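For readers who like to see the mechanics, the flag-then-warn workflow might look something like the minimal sketch below. To be clear, this is purely illustrative: the term list, the warning text, and the function names (`flag_post`, `review_pipeline`) are all my invention, and a real system would use a trained classifier rather than a keyword list. The point is only that flagged posts get a contract-reminder warning first, and human reviewers see only the posts users insist on sending anyway.

```python
# Hypothetical sketch of the flag-and-warn workflow described above.
# The term set, warning text, and return values are illustrative only;
# a real moderation system would use a classifier, not a keyword list.

ABUSIVE_TERMS = {"slur1", "slur2"}  # placeholder for a real abuse detector

WARNING = (
    "This post may violate the site's content boundaries. "
    "Under the terms of service you agreed to, abusive posts can lead to "
    "suspension, a fine, or legal liability if the content is defamatory. "
    "Do you still want to post?"
)

def flag_post(text: str) -> bool:
    """Return True if the post contains a term on the abusive list."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & ABUSIVE_TERMS)

def review_pipeline(text: str, user_confirms: bool) -> str:
    """Flag a post, warn the user, and escalate to human review
    only if the user insists on posting anyway."""
    if not flag_post(text):
        return "posted"
    if not user_confirms:
        return "withdrawn"  # user took it down after seeing the warning
    return "queued_for_human_review"
```

Notice that human review sits at the end of the funnel: if even a fraction of users withdraw flagged posts after seeing the warning, the staffing cost of moderation shrinks accordingly.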
Of course, contracts can only do so much, but they might help.