I think that Twitter’s announced change to let users hide replies to their tweets will give users more control over their conversations on the platform. In my research, I have seen far-right groups post racial hate as replies to tweets by users from racial and ethnic minority groups. Hide and mute functions could give these users a level of content moderation control over how their profiles are presented to the world. Of course, this does not absolve Twitter and other social media platforms of the responsibility to actively look for content that needs to be moderated, and the broader movement towards platform accountability has put pressure on social media platforms to do their part in promoting healthier online communicative spaces. I think giving users more options to moderate what sorts of content are publicly displayed on their profiles is important and can be contextualized within larger privacy debates around social media platforms.

There is always a potential for features that give users moderation and editorial control to skew how they are portrayed. Since its founding, Twitter has been seen as a very open, sometimes ‘Wild West’ corner of the mainstream Internet. The good, the bad, and the ugly are easily seen in viral hashtags and in reply chains to prominent ‘verified’ users.

Part of the attraction of Twitter, and something I note in my book Twitter: Social Communication in the Twitter Age, is that users have always liked being able to tweet at whomever they want and to see those replies displayed alongside the original tweet. There is, of course, some empowerment in this ability to interact with anyone on the platform. But if tweet streams are curated, they can also present a selective picture of how a user’s tweets have been received.

However, just because replies are hidden or @-mentions blocked does not mean that users cannot call out the inaccuracy or injustice in a tweet. Tweets can still be mentioned within other tweets, and I have not heard of any plans to change this functionality.

Of course, any moderation has its challenges. But many users have faced significant problems with abusive content in replies and mentions, and a seamless way to moderate the content associated with one’s tweets will be a welcome change for them. It is easy to focus on the fact that curation can present a particular reception of a tweet, while forgetting the racist, homophobic, misogynistic, and otherwise extremely hateful content that can appear in replies and @-mentions. Striking a balance is clearly going to be challenging for Twitter. I am sure that the agents of certain verified users, such as some politicians and celebrities, will use this feature to keep reply streams consistent with their client’s brand or image. But, on the other hand, celebrities who are ethnic and racial minorities can use the same functionality to manage racist trolls.

In terms of Twitter’s initiatives towards monitoring its own ‘health’, I think changes such as this are part of Twitter trying to be a more accountable social media platform and recognizing that, as my book Twitter argues, the platform has become an everyday form of communication around social, political, and economic issues. As such, it is crucially important that the platform is accountable for the fake news, bots, and extreme content that affect its health. For example, Twitter’s move to ban political ads likely comes, in part, from a sense that such ads were creating unhealthy levels of targeting and polarization.

Ultimately, I think content moderation, both through machine-based algorithms and human-based methods, is something Twitter needs to continue to invest in, given that the platform now hosts everything from everyday chatter to profound conversation. Many turn to Twitter as their first port of call for broadcasting their opinions publicly, given the much stronger privacy controls of platforms such as Facebook that limit whom you can engage with. However, these opinions can lead to bullying and have real impacts on the lives of users (including bullying that has led to suicides). Giving users methods akin to the privacy settings on platforms such as Facebook may provide some of them much greater control over their experience on Twitter and be a game changer for these users.

As digital technologies become more pervasive in our everyday lives, some individuals, communities, and groups are taking active measures towards digital silence. The argument about information overload has been particularly charged in debates around youth, where it has sometimes led to moral panics.

Social media platforms are, by design, built to increase volume rather than to foster silence. Moreover, the economics of social platforms are built around increasing engagement, not reducing it. So-called ‘cell phone addiction’ has been studied for years now, and such studies generally feature respondents who report positive aspects of withdrawing from their phones for a period.

The logic is not dissimilar to moves such as #MeatlessMonday, but, instead of a temporary break from meat, it is a break from ‘normal’ digital usage. However, as my experiences with my students show, this is not a straightforward task. A decade ago, I used to ask students in a seminar class of mine at Bowdoin College, In The Facebook Age, to see how long they could go without a digital device and then to write about their experiences. Some students really enjoyed the experience, while others felt it had major repercussions on their social lives.

Today, digital technologies are far more pervasive in our lives than they were in 2009, with everything from payments to health applications residing in social media and smartphones. Our phones sometimes literally unlock doors. Therefore, even when iOS alerts shame us about excessive screen time, we generally ignore them quickly. At least we are losing our keys less often…

I was recently asked whether videos of everyday racism posted on social media can be seen as a form of activism. From my perspective, the recording of such videos by non-celebrities is definitely a form of activism, as it allows individuals who may not have had the networks or means to convey experiences of everyday racism to reach large, even global, publics. In this way, these technologies give individuals an audience that was far more difficult, though not impossible, to achieve before social media. Having a mobile phone with you at all times also means carrying powerful video and audio recording equipment that was not at people’s fingertips before the advent of smartphones. Previously, our ability to record acts of everyday racism was more limited, in the sense that one had to have a camera or other recording technology to hand. We sometimes forget that having recording technology with us at all times is a very recent shift.

This is not to say that forms of reporting racism did not occur in the past. Indeed, one of the most memorable examples of catching out racism and disseminating it worldwide was the infamous recording of the beating of Rodney King in Los Angeles in 1991. This points to the notion of ‘sousveillance’: the idea that people who are normally surveilled by authorities can surveil the authorities in turn. In other words, a flip of who is surveilling whom. This is what happened with Rodney King, and it is what happens today when Americans, for example, turn on Facebook Live or other technologies when they are pulled over by the police and may encounter racism from authority figures. Using these technologies to surveil the authorities may also inhibit some forms of racism beyond the incident at hand. Pulling out one’s phone in the face of an authority figure is therefore a form of activism with the broader implication of telling authority figures that they can be surveilled in unexpected ways.

Given my work on race and racism, I do think forms of sousveillance can help expose white supremacy and racism and challenge racial inequality. I am optimistic that forms of sousveillance involving everyday technologies used by everyday people can have real impacts in combating racial inequalities. Everyday forms of racism in the U.S. committed by authority figures, from the police to government officials, can be called out via videos on social media. The concept of sousveillance is all about the relationship between those who have authority and those who do not, and about flipping that relationship upside down. If those who are historically marginalized are able to document incidents of everyday racism and then find an audience online, whether on YouTube, Facebook, or elsewhere, this can be empowering. If their images are circulated on Instagram, Twitter, or WeChat, there is tremendous potential not only for local circulation within a community but also for wider international circulation, which can in turn translate into visibility among journalists who can highlight these incidents. This can bring racist incidents to public attention, hold those responsible to account, and have real effects in combating white supremacy and other forms of racism.

The question of enhancing diversity in big data and computational social science is fundamentally important. The importance of diversity in computational areas is often ignored, and, as scholars, we do so at our peril.

First, we often have biases in how we interpret data, particularly biases arising from our subject positions (e.g. a researcher coming from a dominant group). These biases often marginalize minority voices in big data projects or simply ‘other’ them.

Second, how data is analyzed, even when it is aggregated or identities are difficult to discern, can be biased by subject positions. For example, we treat social media data as able to tell stories, but researchers often are not looking for diverse stories or asking diverse research questions. Diverse stories need to be actively sought out.

Third, there is a general lack of diversity in the types of data sets we collect, and APIs do not easily facilitate efforts to showcase underrepresented groups. Social media data are often collected around a particular hashtag or set of categories, which may not represent racial/ethnic or other forms of diversity well. Again, racial/ethnic, gender, socioeconomic, and other forms of diversity need to be actively worked towards.
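To make the sampling concern concrete, here is a small synthetic sketch in Python. The tweets, hashtags, and group labels are entirely invented; the point is only that collecting by a single hashtag can yield a different demographic mix than the underlying corpus.

```python
# Synthetic illustration only: invented tweets and group labels showing how
# hashtag-based collection can skew the demographic mix of a dataset.
from collections import Counter

tweets = [
    {"hashtags": ["#election"], "group": "majority"},
    {"hashtags": ["#election"], "group": "majority"},
    {"hashtags": ["#election"], "group": "minority"},
    {"hashtags": ["#ourvoices"], "group": "minority"},
    {"hashtags": ["#ourvoices"], "group": "minority"},
]

def group_shares(sample):
    """Proportion of tweets from each group in a sample."""
    counts = Counter(t["group"] for t in sample)
    total = sum(counts.values())
    return {g: round(c / total, 2) for g, c in counts.items()}

# Collecting only around one hashtag misses the tweets posted under others.
hashtag_sample = [t for t in tweets if "#election" in t["hashtags"]]
print("Full corpus:", group_shares(tweets))             # minority voices well represented
print("#election only:", group_shares(hashtag_sample))  # skewed towards the majority group
```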

Ultimately, it is important to recognize this lack of diversity, and it is also critical for students and faculty from diverse backgrounds to gain literacy in big data and computational research methods.

As recent government hearings in the US and Europe around data privacy have underscored, there are deep consequences to the types of data being circulated. Moreover, the decisions that algorithms make tend to be based on how privileged people (e.g. the software developers designing the algorithms) see the world, often to the detriment of diverse views (which are frequently treated as threatening).

After 11 years of 140-character tweets, Twitter decided to double this to 280 characters from November 2017.

Before rolling the change out to the general public, Twitter began trialing this “feature” with a select group of users (Watson 2017), though initial testing suggested that only 5 per cent of the group opted to use over 140 characters in their tweets (Newton 2017). And critics (e.g., Silver 2017) argue that this will drown out Twitter timelines, compromising the platform’s uniquely succinct form of social communication.

Our contemporary use of Twitter – in part a social, political, and economic information network – has evolved over more than a decade. So it may be some years before the impact of the 280-character expansion can be evaluated. Given that our behaviors on all social media platforms are interlinked, it may be that Twitter is answering a call for individuals to express themselves more fully, though in the context of these platforms more broadly, 280 characters is still relatively terse.

Ultimately, if Twitter continues to be viewed as a platform for bursts of communication rather than in-depth, fully formed dialogue, the increased character count is unlikely to change the platform substantively. Only time will tell whether it significantly impacts people’s use of Twitter.

Social media is heavily influenced by algorithms. For example, the Facebook feed algorithm, from what we know about it, is based on what you and your friends are liking, posting, and doing on the platform (and perhaps even on what ‘people like you’, as data-mined by Facebook, are doing). Many social media algorithms are designed around homophily, and algorithms are, in theory, value neutral. If someone consumes and produces criminal content, the algorithm will try to be helpful and guide the user to relevant criminal content. The algorithms are just following what they are programmed to do. They can equally encourage content around positive civic responsibility if a user has displayed a preference in that direction.
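To make the homophily logic concrete, here is a minimal illustrative sketch in Python. It is not Facebook’s actual algorithm: the function name, topic labels, and weights are all invented, and the only point is that a value-neutral ranker amplifies whatever already attracts engagement.

```python
# Illustrative sketch only: a toy feed ranker that scores posts by how much a
# user and their friends have engaged with the same topics. NOT Facebook's
# actual algorithm; all names and weights are invented.
from collections import Counter

def rank_feed(posts, user_interests, friend_interests, w_user=2.0, w_friends=1.0):
    """Order posts by a naive homophily score.

    posts: list of dicts like {"id": 1, "topics": ["sports"]}
    user_interests / friend_interests: Counter mapping topic -> engagement count
    """
    def score(post):
        return sum(
            w_user * user_interests.get(t, 0) + w_friends * friend_interests.get(t, 0)
            for t in post["topics"]
        )
    # Higher scores surface first: the ranker simply amplifies what the user
    # and 'people like them' already engage with, whatever that content is.
    return sorted(posts, key=score, reverse=True)

# Value neutrality in action: whichever topic has prior engagement gets boosted,
# whether it is civic-minded or harmful.
posts = [{"id": 1, "topics": ["civic"]}, {"id": 2, "topics": ["conspiracy"]}]
print(rank_feed(posts, Counter({"conspiracy": 5}), Counter({"civic": 1})))
```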

To be critical about algorithms, we have to acknowledge both the advantages and the disadvantages of their proliferation. For example, some algorithms are designed for safeguarding, and this can be a real positive. There are algorithmically based filters for Internet searching or video delivery aimed specifically at kids, for example. If a child has a Netflix profile set to Netflix’s child setting, then by the algorithm’s definition they are not supposed to receive age-inappropriate content, and this tends to work in practice. However, if content is categorized incorrectly, the algorithm will simply follow its rule-based instructions and guide kids to inappropriate content as well. So humans are very much part of this process, and when algorithms break down in these instances, the absence of humans in the loop is often partly to blame.
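A minimal sketch of such a rule-based safeguarding filter (the ratings, titles, and field names below are hypothetical, not Netflix’s actual system) shows how faithfully following the rubric lets a mislabelled title slip straight through:

```python
# Hypothetical rule-based age filter: it only ever sees the label a human
# assigned, so a mislabelled item passes through unchallenged.
CHILD_SAFE_RATINGS = {"G", "PG"}  # invented set of allowed ratings for a child profile

def filter_for_child_profile(catalogue):
    """Return only titles whose *labelled* rating is child-safe."""
    return [item for item in catalogue if item["rating"] in CHILD_SAFE_RATINGS]

catalogue = [
    {"title": "Cartoon Capers", "rating": "G"},
    {"title": "Slasher Night", "rating": "R"},
    # Human categorization error: an adult title mistakenly labelled "PG".
    {"title": "Mislabelled Thriller", "rating": "PG"},
]

# The filter faithfully follows its rubric -- including the mistake.
for item in filter_for_child_profile(catalogue):
    print(item["title"])
```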

Ultimately, the algorithms driving social media are what are called ‘black box’ algorithms. These can be defined as algorithms that are generally proprietary and not open source. The algorithm is kept private in terms of its design and operation: documentation is not made publicly available, nor is data about the decisions the algorithm makes. Black box algorithms are alike in that we can only infer particular aspects of how they work by observing their behavior ‘in the wild’.
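As a hedged illustration of what inferring from behavior ‘in the wild’ can look like, the sketch below treats an invented ranker, opaque_rank, as a black box whose internals we cannot inspect and audits it purely from its outputs; every name and weight here is made up for illustration.

```python
# Sketch of a black-box audit: we never read the ranker's code or weights,
# we only probe it with controlled inputs and observe the orderings it returns.
import random

def opaque_rank(posts):
    # Stand-in for a proprietary ranker; pretend these weights are invisible to us.
    hidden_weights = {"conspiracy": 3.0, "sports": 1.0, "civic": 0.5}
    return sorted(
        posts,
        key=lambda p: hidden_weights.get(p["topic"], 0) + random.random() * 0.1,
        reverse=True,
    )

def audit(ranker, topics, trials=100):
    """Estimate how often each topic is ranked first across repeated probes."""
    first_counts = {t: 0 for t in topics}
    for _ in range(trials):
        posts = [{"id": i, "topic": t} for i, t in enumerate(topics)]
        first_counts[ranker(posts)[0]["topic"]] += 1
    # The inferred preference is only what the observed behavior reveals.
    return {t: count / trials for t, count in first_counts.items()}

print(audit(opaque_rank, ["civic", "sports", "conspiracy"]))
```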

At cafes, bus stops, and other public places, I hear people lament what they see as rude behavior and a lack of manners in the age of social media. Etiquette has been enormously important to societies historically, and not always for the right reasons (such as marginalizing individuals or groups of people – especially ethnic and racial minorities). The sociologist Norbert Elias spent much of his book The Civilizing Process investigating etiquette and argued that its development is part of a historical ‘process’. Using the ‘right’ fork was not a random thing. Neither are practices of etiquette on social media.


Etiquette is a reflection of social norms, class, and other demographic factors. As such, it can divide people or create hierarchies. Some social processes were more elite in the past (like diary writing or eating in a restaurant). As social activities become more democratic, practices of etiquette can do so as well. Think of it this way: if someone drops food on the floor during a meal at home and then picks it up and eats it, only those there know about it; but if such behaviors are in tweets, Instagram photos, etc., the action has a much wider audience. For some, the five-second rule applies and for others, such behavior is repugnant. Again, these responses have a lot to do with our social, cultural, and economic background.


Thinking about etiquette is relevant to understanding social media production and consumption. Ultimately, etiquette is socially constructed, and what is considered normal and acceptable varies with a range of socioeconomic and cultural factors. These normative constructions can shape social media habits – from deciding whether using social media apps during a date is acceptable to what kinds of positive and negative things we say about friends, family, and colleagues. Even whether we post a video of our kid comically falling down can be partly influenced by questions of etiquette. Likewise, the acceptability of creeping or of fake ‘catfish’ Facebook profiles partly depends on social media etiquette. Perceived notions of what one is ‘expected’ to do in a situation, or of what is civilized – the latter drawing from eating etiquette, for example – are important to reflect on. Though, as throughout history, etiquette is a contested space with its own politics of inclusion and exclusion.


Ultimately, social media has made social interactions more public – what has traditionally been private has become increasingly public. In addition, social media lets us interact with much larger audiences. Neither is inherently good or bad; both are part of changes in social communication. I also do not think we have become less polite. Rather, the venom and vitriol that have always existed, and that are very much a part of human nature, now have a very public audience they did not have before. Or, in other words, social media is not making us into this or that; we were already there…


Social media, Activism, and Organizations 2015 (#SMAO15) brought together a large, international audience from a variety of disciplines. As founding chair, I was thrilled to receive such a strong set of submissions, which made for an exciting day of talks. #SMAO15 was fortunate to have plenary talks by Jen Schradie, Paul Levy, Alison Powell, Natalie Fenton, and David Karpf. Our keynote talk was by Jennifer Earl. It was a pleasure to also host the launch of Veronica Barassi’s book, Activism on the Web.

Papers at #SMAO15 not only advanced our understandings of contemporary uses of social media in social movements, but also of the many ways in which we can critically reflect on their potential and limitations. Importantly, organizational communication perspectives were central to #SMAO15, and the contributions give scholars in this interdisciplinary area much to think about. Questions around social media in formal and informal movement organizations were explored from a variety of methodological perspectives, including social network analysis, ethnography, surveys, participant observation, and big data analytics. Papers covered regional case studies in Europe, North America, Africa, the Middle East, and Asia. In addition, #SMAO15 brought together practitioners, artists, and scholars. This unique constituency of attendees yielded new perspectives and approaches to studying social media, activism, and organizations.

[Figure: #smao15 Twitter network visualization (social network analysis)]

Over 1000 tweets were posted during the event and you can virtually (re)attend via the #smao15 Twitter archive or network visualization/analytics. As you can see from the network graph above, #smao15 tweets exhibit a focused density.
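For readers curious about what such network analytics involve, here is a brief sketch using the networkx library. The tweets below are invented placeholders rather than the actual #smao15 archive, which a real analysis would load instead; graph density is one simple way to quantify the kind of focused structure described above.

```python
# Sketch of building a mention network from tweets and measuring its density.
# The tweet data here are invented placeholders, not the real #smao15 archive.
import networkx as nx

tweets = [
    {"user": "alice", "mentions": ["bob", "carol"]},
    {"user": "bob", "mentions": ["alice"]},
    {"user": "carol", "mentions": ["alice", "bob"]},
]

G = nx.DiGraph()
for t in tweets:
    for mentioned in t["mentions"]:
        G.add_edge(t["user"], mentioned)  # edge: tweeter -> mentioned account

# Density near 1 means most possible mention ties between participants exist.
print(nx.density(G), G.number_of_nodes(), G.number_of_edges())
```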

The question of whether social media are addictive is being asked more and more. These discourses are generally framed by observations that people seem to ‘always’ be on social media, and the addiction card is played particularly in reference to young people. Though young people’s negotiation of social and mobile media spaces presents many challenges, addiction to social media is, like most addictions, the exception rather than the rule. On the one hand, there are parents who worry and want to exert more control over, or protection of, their kids’ use of mediated communication. However, paternalism and protectionism can have real negative effects for teens, as these behaviors can inhibit their engagement with the world (boyd, 2014) − not only with co-located peers, but with people, groups, and ideas both near and far. That being said, coming of age in a world of smartphones and social media does present many challenges.

We need to be careful with employing the language of addiction. Specifically, has the person built up ‘tolerance’ to social media, requiring increasing exposure to it, and has social media become the most important aspect of their life (what psychologists call ‘salience’) (Griffiths, 2000)? Are there physical ‘withdrawal’ symptoms? Clearly, very few young people fall into social media addiction, and I find that invocations of addiction often obscure the social media debate.

For most young people, social media provide new ways of seeking information, new forms of sociability, and a general augmentation of their social lives (DeAndrea, Ellison, LaRose, Steinfield, & Fiore, 2012). There are specific ‘uses and gratifications’ (Rubin, 2002) that social media afford well, and moral panics around social media addiction reduce the complexity of social media use to statistics of smartphone ownership or hours of screen time. Remember, young people have often been early adopters of, or ‘obsessive’ about, technology − from the advent of the telephone to the Walkman. That hardly makes them addicted. It just makes them young people who are drawn to technologies they perceive as cool. Parents complained about overuse of those technologies then too, and some previously Walkman-toting parents have not fallen far from the tree.

Indeed, they may be walking down the street listening to music and emailing colleagues at work. But, that’s just extreme productivity, right? Young people seem to be painted with a different brush in moral panic portraits. And these artists might want to be more reflexive when it comes to their own technology use.


References:

boyd, danah. (2014). It’s complicated: The social lives of networked teens. New Haven, CT: Yale University Press.

DeAndrea, David C., Ellison, Nicole B., LaRose, Robert, Steinfield, Charles, & Fiore, Andrew. (2012). Serious social media: On the use of social media for improving students’ adjustment to college. The Internet and Higher Education, 15(1), 15–23.

Griffiths, Mark. (2000). Does Internet and computer “addiction” exist? Some case study evidence. CyberPsychology and Behavior, 3(2), 211–218.

Rubin, Alan M. (2002). The uses-and-gratifications perspective of media effects. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.


A version of this post has been published on The Huffington Post.

Social media is all about user-generated content (UGC). We often forget that social media companies are multibillion-dollar industries that are ultimately dependent on the content we – everyday people – create. Their cyberinfrastructure does not, by itself, hold inherent value for a social media platform. Rather, value is maintained and grown through larger and larger amounts of user-generated content. Indeed, what is often overlooked is that it is not just the increased volume of content that matters: we are also increasingly tagging and associating this content, which creates a certain level of stickiness (Jenkins et al.) within social media platforms.

Social media platforms themselves are built to elicit content. For example, emails from Facebook saying that so-and-so liked this or posted that seduce and tantalize us to produce and consume content on Facebook. But, from the vantage point of social media companies and potentially of us as users, sharing should be seamless – indeed, it should almost feel ‘natural’. The notion of frictionless sharing rests on a difficult ontology that places primacy on the ‘shared self’ rather than the private self. The shared self sees a social value in sharing. Some see the social function of sharing as akin to forms of social grooming: when we share with others, we also want social feedback in return. However, a perceived issue with social media is that not all of your photos, posts, etc. will receive a response, so the perception is that the more content one produces and shares, the more likely one is to get a social response (e.g. a like or a comment).

So I ask you readers:

  • What exactly can we ‘frictionlessly’ share? [Feel free to provide examples and links]
  • How do these affect notions of the public and private?
  • Do these technologies make us more pro-social or do they inhibit sharing?