On Tuesday, Twitter announced yet another crackdown on abuse. With the goal of making Twitter a safer place, the company has come up with new ways to:
- Prevent the creation of new abusive accounts;
- Make search safer; and
- Collapse potentially abusive or low-quality tweets.
Twitter also pledged to persist in its anti-abuse efforts, saying it would keep rolling out product changes, some more visible than others, and update users on its progress every step of the way.
“People use it for news and for access to quick gossip,” he told TechNewsWorld, adding that Twitter’s open-ended structure makes it an easier target for abuse.
Twitter will identify the owners of accounts it has suspended permanently and block them from creating new accounts. That may be a reaction to the creation of multiple fake accounts last fall, after Twitter suspended several accounts linked to the alt-right movement, which is known for advocating white supremacy and other extreme views.
Those suspensions came amid mounting criticism of the company’s failure to expunge harassing, racist, sexist and anti-Semitic tweets from its network.
Safe search will filter tweets that contain potentially sensitive content, as well as tweets from blocked and muted accounts, out of search results. Users will still have other ways to search for and access those tweets. Under the new system, potentially abusive and low-quality replies will be collapsed, although they will remain available to users who want to seek them out. The change will roll out in the coming weeks, Twitter said.
Protection or Cybergagging?
“As soon as you introduce subjectivity into regulating Twitter, it loses its appeal,” he told TechNewsWorld. “One person’s freedom of speech is another person’s microaggression. Twitter’s best bet is to say, ‘Abandon all hope ye who enter here.’”
Getting around the problem of subjective judgment will be difficult, McGregor suggested. “How do you decide what’s appropriate or abusive, and what’s not? You need to have a context for the conversation and the relationship.” Friends would couch statements in terms that might be considered inappropriate when relayed to a stranger, he pointed out. “For example, I could tweet the word ‘s**t’ to a friend in response to something he’d said or a news item we were discussing, and it would be all right.”
Using artificial intelligence to filter out potentially offending tweets isn’t going to resolve the issue, because “AI systems have to learn like humans do, and no AI solution will really work unless you have a finite number of inputs,” McGregor pointed out.
Twitter’s Battle Against the Abusers
Twitter in 2014 suspended several accounts for violating its rules after Zelda Williams, daughter of actor Robin Williams, publicly quit the site over hateful tweets about her father’s suicide. She later reactivated her account.
Another victim, Imani Gandy, had been harassed since 2012 by someone with the handle “Assholster,” who created as many as 10 different Twitter accounts a day to hurl racist invective at her.