I think that Twitter’s announced change to let users hide replies to their tweets will give users more control over their conversations on Twitter. In my research, I have seen far-right groups post racial hate as replies to tweets from users belonging to racial and ethnic minority groups. Hide and mute functions could give these users a degree of content moderation control over how their profiles are presented to the world. Of course, this does not absolve Twitter and other social media platforms of the responsibility to actively look for content that needs to be moderated, and the broader movement towards platform accountability has put pressure on social media platforms to do their part in promoting healthier online communicative spaces. I think giving users more options to moderate what sort of content is publicly displayed on their profiles is important, and it can be contextualized within larger privacy debates around social media platforms.
There is always a potential for features that provide user-level moderation and editorial control to skew how users are portrayed. Since its founding, Twitter has been seen as a very open, sometimes ‘Wild West’ corner of the mainstream Internet. The good, the bad, and the ugly are easily seen in viral hashtags and in reply chains to prominent ‘verified’ users.
Part of the attraction of Twitter, and something I note in my book Twitter: Social Communication in the Twitter Age, is that users have always liked being able to tweet at whomever they want and to see the replies right there. There is, of course, some empowerment in this ability to interact with anyone on the platform. If reply streams are curated, however, they can present only a particular side of how a tweet has been received.
However, just because replies are hidden or @-mentions blocked does not mean that users cannot call out the inaccuracy or injustice in a tweet. Tweets can still be mentioned within other tweets, and I have not heard of plans to change this functionality.
Of course, any moderation has its challenges, but many users have faced significant abuse in replies and mentions, and a seamless way to moderate the content associated with their tweets will be a very welcome change for them. It is easy to focus on the fact that curation can present a particular reception of a tweet, but we often forget the racist, homophobic, misogynistic, and other extremely hateful content that can appear in replies and @-mentions. Striking a balance is clearly going to be challenging for Twitter. I am sure that the agents of certain verified users, such as some politicians and celebrities, will use this feature to keep reply streams consistent with their client’s brand or image. But, on the other hand, celebrities who are ethnic and racial minorities can use this functionality to manage racist trolls.
In terms of Twitter’s initiatives towards monitoring its own ‘health’, I think changes such as this are part of Twitter trying to be a more accountable social media platform and recognizing that, as my book Twitter argues, the platform has become an everyday form of communication around social, political, and economic issues. As such, it is crucially important that the platform is accountable for the fake news, bots, and extreme content that affect its health. For example, Twitter’s move to ban political ads likely stems in part from a sense that such ads were creating unhealthy levels of targeting and polarization.
Ultimately, I think content moderation, both through machine-based algorithms and through human-based methods, is something Twitter needs to continue to invest in so that it can monitor content, given that the platform hosts both everyday and profound conversation. Many turn to Twitter as their first port of call for publicly broadcasting their opinions, since platforms such as Facebook have much stronger privacy controls that limit whom users can engage with. However, those opinions can attract bullying that has real impacts on users’ lives, including cases that have led to suicide. Giving users tools akin to the privacy settings on platforms such as Facebook may provide much greater control over their experience on Twitter, and for some users it will be a game changer.