In my book on Twitter (the first academic book on the subject), I make the argument that social media such as Twitter can serve as a ‘public sphere’ where major public debates as well as quite intimate issues (like a cancer diagnosis or intimate partner violence (IPV)) can be discussed. Social media platforms can also change minds. Perhaps the easiest way to reflect on this is through more extreme speech. Take the case of the January 6th, 2021 U.S. Capitol insurrection, where journalists and academics (including myself) found that posts on Parler and other social media were critical to fomenting the insurrection.

Moreover, what one person sees on social media feeds and timelines as venting or even raging may be seen by another in a completely different light. Of course, plenty of content on social media is dismissed, ignored, skimmed, or otherwise disregarded, but a single tweet can be quite powerful (particularly if it goes viral or is spread by influencers), and, historically, there have been plenty of cases where a single tweet changed opinions. Unfortunately, this can have negative effects, such as a single tweet convincing someone not to get a vaccination. Ultimately, social media platforms such as Twitter serve public sphere functions and can influence where both individuals and groups side on pressing issues (e.g., school shootings, mask mandates, and abortion laws in the US, or the 2020-2021 farmers’ protests and vaccine mandates in India).

As of July 2020, TikTok had roughly 80-100 million users in the US, though reports on this vary. TikTok users are also ‘growing up’, according to Adweek, as older American users have started using the platform: the percentage of U.S.-based TikTok users aged 18-24 fell from 41.1% in January to 35.3% in April. During that same time, 25- to 34-year-old users rose from 22.4% to 27.4%, and users aged 35-44 grew from 13.9% to 17.1%. TikTok users have been vocal about their views on the platform being banned. Social media opinion (on Twitter and TikTok) is probably the best proxy we have for quickly assessing what everyday users as well as influencers think about the ban. First of all, it is clear that a vocal number of TikTok users think the ban is partially (or even fully) in retaliation for how some influencers have been making fun of President Trump on the platform. Many tweets single out the viral TikTok influencer Sarah Cooper, who has become famous worldwide for her lip-synced parodies of President Trump, as the reason for the ban. There has been a backlash on Twitter whereby users say that because Trump thinks Sarah Cooper is ‘mean’, he is banning TikTok to stop ‘mean girls’.

Many TikTok users have been deeply frustrated by the ban, as the platform has become extremely important to their everyday self-expression. There are many posts by users documenting how they never thought their videos would be watched by anyone, yet they became extremely popular content creators. Others have made clear that TikTok has been critical to their political activism, such as BlackLivesMatter; in my own work, I have been studying TikTok posts shared live from BlackLivesMatter protests. Ultimately, we also need to remember that TikTok, like YouTube, enables content creators to make money from the platform, and there are many whose income comes mostly from TikTok. These users are perhaps the most worried, as their livelihoods are literally at stake. They are the ones live streaming and trying to migrate followers to Instagram and YouTube to protect their incomes. Everyday users are clearly upset, but many users from younger demographics are already used to migrating from one social media platform to another. Instagram’s Reels product could be where such a migration occurs.

Banning TikTok is a data privacy issue, but it is, of course, a deeply political decision as well. Data privacy is an important issue, and there has been increased awareness in the US of the use of our data in unintended ways after Cambridge Analytica and the Zuckerberg public hearings. Asking a Chinese company to divest from a company collecting US personal data is not unusual in the US. But the ban on TikTok is, of course, about more than data privacy. It is clearly political and part of the technology wars between the US and China that are bundled with the new Cold War between the countries. There is a geopolitical through line from embassy closures to TikTok being banned. TikTok could have been allowed to quietly divest from ByteDance, its parent company, to a US company, but this ban has clearly not been quiet. Indeed, Microsoft’s interest in TikTok could have been approved by President Trump early on if data privacy were the central issue at play here. However, US media are widely reporting that talks between Microsoft and ByteDance have stalled because President Trump has said he won’t allow the acquisition.

I think Twitter’s announced change to enable users to hide replies to their tweets will likely give users more control over their conversations on Twitter. In my research, I have seen far-right groups post racial hate as replies to tweets from users belonging to racial and ethnic minority groups. Hide and mute functions could provide these users a level of content moderation control over how their profiles are presented to the world. Of course, this does not absolve Twitter and other social media platforms of the responsibility to actively look for content that needs to be moderated, and the broader movement towards platform accountability has put pressure on social media platforms to take charge and do their part in promoting healthier online communicative spaces. I think giving users increased options to moderate what sorts of content are publicly displayed on their profiles is important and can be contextualized within larger privacy debates around social media platforms.

There is always a potential for features that provide levels of user-level moderation and editorial control to skew how users are being portrayed. Since its founding, Twitter has been seen as a very open, sometimes ‘Wild West’ space of the mainstream Internet. The good, the bad, and the ugly are easily seen in viral hashtags and reply chains to prominent ‘verified’ users.

Part of the attraction of Twitter, and something I note in my book Twitter: Social Communication in the Twitter Age, is that users have always liked being able to tweet at whomever they want and to see the replies there. There is, of course, some empowerment in this ability to interact with anyone on the platform. If tweet streams are curated, however, this can present only a particular side of how a user’s tweets are received.

However, just because replies are hidden or @-mentions blocked does not mean that users cannot call out the inaccuracy or injustice in a tweet. Tweets can still be referenced within other tweets, and I have not heard of plans to change this functionality.

Of course, any moderation has its challenges, but many users have faced significant abuse in replies and mentions, and creating a seamless way to moderate the sorts of content associated with one’s tweets will be seen as a really welcome change by some users. It is easy to focus on the fact that curation can present a particular reception of a tweet, but we often forget the racist, homophobic, misogynistic, and other extremely hateful content that can appear in replies to tweets and in @-mentions. Creating a balance is clearly going to be challenging for Twitter. I am sure that the agents of certain verified users, such as some politicians and celebrities, will use this feature to maintain a particular image within reply streams that fits their client’s brand. But, on the other hand, celebrities who are ethnic and racial minorities can use this functionality to manage racist trolls.

In terms of Twitter’s initiatives towards monitoring its own ‘health’, I think changes such as this are part of Twitter trying to be a more accountable social media platform and taking notice that, as my book Twitter argues, the platform has become an everyday form of communication around social, political, and economic issues. As such, it is crucially important that the platform is accountable for the fake news, bots, and extreme content that affect its health. For example, Twitter’s move to ban political ads likely comes partly from a feeling that such ads were creating unhealthy levels of targeting and polarization.

Ultimately, I think content moderation, both machine-based algorithms and human-based methods, is something Twitter needs to continue to invest in to be able to monitor content, given that the platform hosts everyday conversation as well as profound conversation. Many turn to Twitter as their first port of call for publicly broadcasting their opinions, given the much stronger privacy controls of platforms such as Facebook that limit whom you can engage with. However, these opinions can lead to bullying and have real impacts on the lives of users (including bullying leading to suicides). Giving users methods akin to the privacy settings on platforms such as Facebook may provide some users much greater control over their experience on Twitter and be a game changer for them.

As digital technologies become more pervasive in our everyday lives, some individuals, communities, and groups are taking active measures towards digital silence. The argument about information overload has been particularly charged in debates around youth, sometimes leading to moral panics around the topic.

Social media platforms are, by design, built to increase volume rather than foster silence. Moreover, the economics of social platforms are built around augmenting engagement, not silencing it. So-called ‘cell phone addiction’ has been studied for years now, and such studies generally feature respondents who report positive aspects of withdrawing from their phones for some period.

The logic is not dissimilar to moves such as #MeatlessMonday, but, instead of providing a temporary break from meat, it is a break from ‘normal’ digital usage. However, as my experiences with my students show, this is not a straightforward task. A decade ago, in a seminar class of mine at Bowdoin College, In The Facebook Age, I used to ask students to see how long they could go without a digital device and then to write about their experiences. Some students really enjoyed the experience, and others felt it had major repercussions on their social lives.

Today, digital technologies are far more pervasive in our lives than they were in 2009, with everything from payments to health applications residing in social media and smartphones. Our phones sometimes literally unlock doors. Therefore, even if iOS alerts shame us about excessive screen time, we generally ignore them quickly. At least we are losing our keys less often…

I was recently asked whether videos of everyday racism posted on social media can be seen as a form of activism. From my perspective, the recording of such videos by non-celebrities is definitely a form of activism, as it allows individuals who may not have had the networks or means to convey experiences of everyday racism to large, especially global, publics. In this way, these technologies allow individuals to have an audience that was far more difficult, though not impossible, to achieve pre-social media. Also, having a mobile phone with you at all times gives you extremely powerful video and audio recording equipment that was not at people’s fingertips before the advent of smartphones. Our ability to record acts of everyday racism was more limited in the sense that one had to have a camera or other recording technology on hand. We sometimes forget that having recording technology with us at all times is a very recent shift.

This is not to say that forms of reporting racism did not occur in the past. Indeed, one of the most memorable examples of catching out racism and disseminating it worldwide was the infamous 1991 recording of Rodney King being beaten by police in Los Angeles. This points to a notion termed ‘sousveillance’: the idea of people who are normally surveilled by authorities being able to surveil the authorities themselves. In other words, a flip of who is surveilling whom. This is what happened with Rodney King, and it is what happens with Americans today who, for example, turn on Facebook Live or other technologies when they are pulled over by the police and may encounter racism in their everyday interactions with authority figures. Using these technologies to surveil the authorities may also inhibit some forms of racism beyond the incident at hand. Pulling out one’s phone in the face of an authority figure is therefore a form of activism with broader implications: it says to authority figures that they can be surveilled in unexpected ways.

Given my work on race and racism, I do think forms of sousveillance can help expose white supremacy and racism and challenge racial inequality. I am optimistic that forms of sousveillance involving everyday technologies used by everyday people can have real impacts and make a difference in combating racial inequalities. Everyday forms of racism in the U.S. committed by authority figures, from the police to government officials, can be called out via videos on social media. The concept of sousveillance is all about the relationship between those who have authority and those who do not, and about flipping that relationship upside down. If those who are historically marginalized feel able to document incidents of everyday racism and then find an audience online, whether on YouTube, Facebook, or elsewhere, that can be empowering. If their images are circulated on Instagram, Twitter, or WeChat, there is tremendous potential not only for local circulation within a community but also for wider international circulation, which can in turn translate into visibility among journalists who are able to highlight these incidents. This can bring racist incidents to public attention, hold those responsible to account, and have real effects in combating white supremacy and other forms of racism.

The question of enhancing diversity in big data and computational social science is fundamentally important. I think the importance of diversity in computational areas is often ignored and, as scholars, we do so at our peril.

First of all, we often have biases in how we interpret data, specifically bias stemming from particular subject positions (e.g., a researcher coming from a dominant group). This often marginalizes minority voices in big data projects or simply ‘others’ them.

Second, how data is analyzed, even when it is aggregated or identities are difficult to discern, can be biased by subject positions. For example, we treat social media data as able to tell stories, but researchers often are not looking for diverse stories or asking diverse research questions. Diverse stories need to be actively sought out.

Third, we have a general lack of diversity in the types of datasets we collect, and APIs do not easily facilitate efforts to showcase underrepresented groups. In social media research, data is often collected around a particular hashtag or set of categories, which may not represent racial/ethnic or other diversity well. Again, racial/ethnic, gender, socioeconomic, and other forms of diversity need to be actively worked towards, as the sketch below illustrates.
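
To make this sampling issue concrete, here is a minimal Python sketch, using an invented toy corpus rather than real API data, of how hashtag-based collection can miss relevant voices that discuss the same topic without the tracked tag:

```python
# Hypothetical illustration of hashtag-based sampling bias: only posts that
# carry the tracked hashtag are captured, so people discussing the same
# events in other words never enter the dataset.
posts = [  # invented toy corpus
    {"user": "a", "text": "Marching today #protest"},
    {"user": "b", "text": "We are marching downtown right now"},   # no hashtag
    {"user": "c", "text": "Solidarity from our neighbourhood"},    # no hashtag
    {"user": "d", "text": "Live from the rally #protest"},
]

def collect_by_hashtag(posts, hashtag):
    """Keep only posts containing the tracked hashtag (a typical query strategy)."""
    return [p for p in posts if hashtag in p["text"].lower()]

sample = collect_by_hashtag(posts, "#protest")
print(f"{len(sample)} of {len(posts)} relevant posts captured")  # 2 of 4
```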

Ultimately, it is important to understand that there is a lack of diversity in these areas, and it is also critical for students and faculty from diverse backgrounds to have literacy in big data and computational research methods.

As recent government hearings in the US and Europe around data privacy have underscored, there are deep consequences to the types of data being circulated. Moreover, the decisions that algorithms make tend to be based on how privileged people (e.g., the software developers designing the algorithms) see the world. This often works to the detriment of diverse views (which are often seen as threatening).

After 11 years of 140-character tweets, Twitter decided to double this to 280 characters from November 2017.

Before rolling the change out to the general public, Twitter began trialing this ‘feature’ with a select group of users (Watson 2017), though initial testing suggested that only 5 per cent of the group opted to use more than 140 characters in their tweets (Newton 2017). Critics (e.g., Silver 2017) argue that the change will drown out Twitter timelines, compromising the platform’s uniquely succinct form of social communication.

Our contemporary use of Twitter – in part a social, political, and economic information network – has evolved over more than a decade. So it may be some years before the impact of the 280-character expansion can be evaluated. Given that our behaviors on all social media platforms are interlinked, it may be that Twitter is answering a call for individuals to express themselves more fully, though in the context of these platforms more broadly, 280 characters is still relatively terse.

Ultimately, if Twitter continues to be viewed as a platform for bursts of communication rather than in-depth, fully formed dialogue, the increased character count is unlikely to substantively change the platform. Only time will tell whether it significantly impacts people’s use of Twitter.

Social media is heavily influenced by algorithms. For example, the Facebook feed algorithm, from what we know about it, is based on what you and your friends are liking, posting, and doing on the platform (and perhaps even on ‘people like you’ whom Facebook is data mining). Many social media algorithms are designed around homophily, and algorithms are theoretically value neutral. If someone consumes and produces criminal content, the algorithm will try to be helpful and guide the user to relevant criminal content. The algorithms are just following what they are programmed to do. Algorithms can equally encourage content around positive civic responsibility if a user has displayed a preference in that direction.
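
As a purely illustrative, hypothetical sketch (not any platform’s actual code), a homophily-driven ranker can be as simple as scoring candidate posts by their overlap with the topics a user has already engaged with, indifferent to whether those topics are harmful or civic-minded:

```python
# Toy homophily-style feed ranker: scores candidate posts purely by overlap
# with the topics the user has already engaged with. The data and function
# names are invented for illustration.
from collections import Counter

def rank_feed(user_history, candidate_posts, top_n=3):
    """Rank posts by topic overlap with the user's prior engagements."""
    interests = Counter(topic for post in user_history for topic in post["topics"])
    def score(post):
        return sum(interests[t] for t in post["topics"])
    return sorted(candidate_posts, key=score, reverse=True)[:top_n]

# The ranker is indifferent to what the topics actually are.
history = [{"topics": ["conspiracy"]}, {"topics": ["conspiracy", "memes"]}]
candidates = [
    {"id": 1, "topics": ["conspiracy"]},
    {"id": 2, "topics": ["civic_engagement"]},
    {"id": 3, "topics": ["memes"]},
]
print([p["id"] for p in rank_feed(history, candidates)])  # [1, 3, 2]
```

The scoring rule has no notion of what the topics mean; it simply amplifies whatever preferences the user has already displayed.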

To be critical about algorithms, we do have to acknowledge the advantages and disadvantages of algorithmic proliferation. For example, some algorithms are designed for safeguarding, and this can be a real positive. There might be algorithmically based filters for Internet searching or for video delivery specific to kids, for example. If a child has a profile on Netflix that is set to Netflix’s child setting, then by the algorithm’s definition they are not supposed to receive content that is age inappropriate. This tends to work in practice. Though, if content is inappropriately categorized, the algorithm would of course just follow its rule-based instructions and could guide kids to inappropriate content as well. So humans are very much part of this process, and when algorithms break down in these instances, part of the problem can be attributed to not having humans in the loop.
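
As a toy illustration (with invented catalogue data, not Netflix’s actual system), a rule-based child-profile filter simply trusts each item’s recorded rating, so a mislabelled item passes straight through:

```python
# Hypothetical rule-based safeguarding filter: it trusts each item's recorded
# age rating, so an item mislabelled upstream by humans is let through.
CHILD_SAFE_RATINGS = {"G", "PG"}

def child_profile_filter(catalogue):
    """Return only items whose recorded rating is child-appropriate."""
    return [item for item in catalogue if item["rating"] in CHILD_SAFE_RATINGS]

catalogue = [
    {"title": "Cartoon Adventures", "rating": "G"},
    {"title": "Late-Night Thriller", "rating": "R"},
    {"title": "Mislabelled Horror", "rating": "PG"},  # wrongly categorised upstream
]
print([i["title"] for i in child_profile_filter(catalogue)])
# ['Cartoon Adventures', 'Mislabelled Horror']  <- the mislabelled item slips through
```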

Ultimately, the algorithms driving social media are what are called ‘black box algorithms’. These can be defined as algorithms that are generally proprietary and not open-source. The algorithm is kept private in terms of its design and operation: documentation is not made publicly available, nor is data released about the decisions the algorithm makes. Black box algorithms are thus alike in that we can only infer particular aspects of them by observing their behavior ‘in the wild’.

At cafes, bus stops, and other public places, I hear people lament what they see as rude behavior and claim that there are no manners in the age of social media. Etiquette has been enormously important to societies historically, and not always for the right reasons (such as marginalizing individuals or groups of people, especially ethnic and racial minorities). The sociologist Norbert Elias spent much of his book The Civilizing Process investigating etiquette and argued that its development is part of a historical ‘process’. Using the ‘right’ fork was not a random thing. Nor are practices of etiquette on social media.

Etiquette is a reflection of social norms, class, and other demographic factors. As such, it can divide people or create hierarchies. Some social processes were more elite in the past (like diary writing or eating in a restaurant). As social activities become more democratic, practices of etiquette can do so as well. Think of it this way: if someone drops food on the floor during a meal at home and then picks it up and eats it, only those there know about it; but if such behaviors are in tweets, Instagram photos, etc., the action has a much wider audience. For some, the five-second rule applies and for others, such behavior is repugnant. Again, these responses have a lot to do with our social, cultural, and economic background.

Thinking about etiquette can be relevant to understandings of social media production and consumption. Ultimately, etiquette is socially constructed, and what is considered normal and acceptable varies based on a variety of socioeconomic and cultural factors. These normative constructions can shape social media habits – from deciding whether using social media apps during a date is acceptable to what types of positive and negative things we say about friends, family, and colleagues. Even whether we post a video of our kid comically falling down can partly be influenced by questions of etiquette. In addition, the acceptability of creeping or of fake ‘catfish’ Facebook profiles also partly depends on social media etiquette. Certain perceived notions of what one is ‘expected’ to do in a situation, or of what is civilized – the latter drawing from eating etiquette, for example – are important to reflect on. Though, as throughout history, etiquette is a contested space with politics of inclusion and exclusion.

Ultimately, social media has made social interactions more public – what has traditionally been private has become increasingly public. In addition, social media lets us interact with much larger audiences. Neither is inherently good or bad; both are part of changes in social communication. I also do not think we have become less polite. Rather, the venom and vitriol we have always had throughout time, which is very much part of human nature, now has a very public audience that it did not have before. Or, in other words, social media is not making us into this or that; we were already there…

Social media, Activism, and Organizations 2015 (#SMAO15) brought together a large, international audience from a variety of disciplines. As founding chair, I was thrilled to receive such a strong set of submissions, which made for an exciting day of talks. #SMAO15 was fortunate to have plenary talks by Jen Schradie, Paul Levy, Alison Powell, Natalie Fenton, and David Karpf. Our keynote talk was by Jennifer Earl. It was a pleasure to also host the launch of Veronica Barassi’s book, Activism on the Web.

Papers at #SMAO15 not only advanced our understandings of contemporary uses of social media in social movements, but also the many ways in which we can critically reflect on their potential and limitations. Importantly, organizational communication perspectives were central to #SMAO15, and the contributions give scholars in this interdisciplinary area much to think about. Questions around social media in formal and informal movement organizations were explored from a variety of methodological perspectives, including social network analysis, ethnography, surveys, participant observation, and big data analytics. Papers covered regional case studies in Europe, North America, Africa, the Middle East, and Asia. In addition, #SMAO15 brought together practitioners, artists, and scholars. This unique constituency of attendees yielded new perspectives and approaches to studying social media, activism, and organizations.

[Figure: #smao15 Twitter network visualization]

Over 1000 tweets were posted during the event and you can virtually (re)attend via the #smao15 Twitter archive or network visualization/analytics. As you can see from the network graph above, #smao15 tweets exhibit a focused density.
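
For readers curious about the measure behind that observation, here is a minimal Python sketch using networkx, with an invented edge list rather than the actual #smao15 data, of how the density of a hashtag’s mention network can be computed:

```python
# Minimal sketch of network density for a hashtag mention network.
# The edge list below is hypothetical, not the real #smao15 archive.
import networkx as nx

# Each edge is (tweeting_account, mentioned_account) in the hashtag stream.
edges = [
    ("alice", "smao15_org"), ("bob", "smao15_org"),
    ("carol", "smao15_org"), ("alice", "bob"), ("carol", "alice"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("density:", round(nx.density(G), 3))  # 0.417 for this toy graph
```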