In my book on Twitter (the first academic book on the subject), I make the argument that social media such as Twitter can serve as a ‘public sphere’ where major public debates as well as quite intimate issues (like a cancer diagnosis or intimate partner violence (IPV)) can be discussed. Social media platforms can also change minds. Perhaps the easiest way to reflect on this is through more extreme speech. Take the case of the January 6th, 2021 U.S. Capitol insurrection, where journalists and academics (including myself) found that posts on Parler and other social media were critical to fomenting the insurrection.


Moreover, what one person sees on social media feeds and timelines as venting or even raging may be seen by another in a completely different light. Of course, while plenty of content on social media is dismissed, ignored, or skimmed, a single tweet can actually be quite powerful (particularly if it goes viral or is spread by influencers), and, historically, there have been plenty of cases where a single tweet changed opinions. Unfortunately, this can have negative effects, such as a single tweet convincing someone not to get a vaccination. Ultimately, social media platforms such as Twitter serve public sphere functions and can influence where both individuals and groups stand on pressing issues (e.g., in the US, school shootings, mask mandates, and abortion laws; in India, the 2020-2021 farmers’ protests and vaccine mandates).

I was recently asked whether videos of everyday racism posted on social media can be seen as a form of activism. From my perspective, the recording of everyday racism by non-celebrities is definitely a form of activism, as it allows individuals who may not have had the networks or the means to convey experiences of everyday racism to large publics, especially global ones. In this way, these technologies give individuals an audience that was far more difficult, though not impossible, to reach before social media. Moreover, having a mobile phone at all times puts extremely powerful video and audio recording equipment at people’s fingertips in a way that was not possible before the advent of smartphones. Previously, our ability to record acts of everyday racism was limited in the sense that one had to be carrying a camera or other recording technology. We sometimes forget that having recording technology with us at all times is a very recent shift. This is not to say that forms of reporting racism did not occur in the past. Indeed, one of the most memorable instances of catching out racism and disseminating it worldwide was the infamous recording of the beating of Rodney King in Los Angeles in 1991. This points to a notion termed ‘sousveillance’: the idea of people who are normally surveilled by authorities being able to surveil the authorities themselves. In other words, a flip of who is surveilling whom. This is what happened with Rodney King, and it is what happens with Americans today who, for example, turn on Facebook Live or other technologies when they are pulled over by the police and may encounter racism in their everyday interactions with authority figures. Using these technologies to surveil the authorities may also inhibit some forms of racism beyond the incident at hand.
Pulling out one’s phone in the face of an authority figure is therefore a form of activism with broader implications: it says to authority figures that they can be surveilled in unexpected ways.

Given my work on race and racism, I do think forms of sousveillance can help expose white supremacy and racism and challenge racial inequality. I am optimistic that forms of sousveillance involving everyday technologies used by everyday people can have real impacts in combating racial inequalities. Everyday forms of racism in the U.S. committed by authority figures, from the police to government officials, can be called out via videos on social media. The concept of sousveillance is fundamentally about the relationship between those who have authority and those who do not, and about flipping that relationship upside down. If those who have been historically marginalized feel able to document incidents of everyday racism and then find an audience online, whether on YouTube, Facebook, or elsewhere, this can be empowering. If their images are circulated on Instagram, Twitter, or WeChat, there is tremendous potential not only for local circulation within a community but also for wider international circulation, which can translate into visibility among journalists who are able to highlight these incidents. This can bring racist incidents to public attention, hold those responsible to account, and have real effects in combating white supremacy and other forms of racism.


Social media, Activism, and Organizations 2015 (#SMAO15) brought together a large, international audience from a variety of disciplines. As founding chair, I was thrilled to receive such a strong set of submissions, which made for an exciting day of talks. #SMAO15 was fortunate to have plenary talks by Jen Schradie, Paul Levy, Alison Powell, Natalie Fenton, and David Karpf. Our keynote talk was by Jennifer Earl. It was a pleasure to also host the launch of Veronica Barassi’s book, Activism on the Web.

Papers at #SMAO15 not only advanced our understanding of contemporary uses of social media in social movements, but also explored the many ways in which we can critically reflect on their potential and limitations. Importantly, organizational communication perspectives were central to #SMAO15, and the contributions made give scholars in this interdisciplinary area much to think about. Questions around social media in formal and informal movement organizations were explored from a variety of methodological perspectives, including social network analysis, ethnography, surveys, participant observation, and big data analytics. Papers covered regional case studies in Europe, North America, Africa, the Middle East, and Asia. In addition, #SMAO15 brought together practitioners, artists, and scholars. This unique constituency of attendees yielded new perspectives and approaches to studying social media, activism, and organizations.

[Network visualization of the #smao15 Twitter archive]

Over 1000 tweets were posted during the event and you can virtually (re)attend via the #smao15 Twitter archive or network visualization/analytics. As you can see from the network graph above, #smao15 tweets exhibit a focused density.
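For readers unfamiliar with how a density figure for a hashtag network is derived, the sketch below shows the basic calculation. The edge list is entirely hypothetical (it is not the actual #smao15 data), and the account names are invented for illustration.

```python
# A minimal sketch of computing the density of a directed mention
# network. Each pair is (tweeter, mentioned_account); the data below
# is hypothetical, not the actual #smao15 archive.
edges = {
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "alice"), ("dave", "alice"),
}

# Nodes are every account that appears in any edge.
nodes = {account for edge in edges for account in edge}

# Directed-graph density: observed edges divided by the n*(n-1)
# possible directed edges among n nodes.
n = len(nodes)
density = len(edges) / (n * (n - 1)) if n > 1 else 0.0

print(f"{n} nodes, {len(edges)} edges, density = {density:.3f}")
```

A denser graph (density closer to 1) indicates that participants were mentioning one another heavily rather than tweeting in isolation, which is one way to read a "focused" conference backchannel.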

Social media is all about user-generated content (UGC). We often forget that social media companies are multibillion-dollar industries that are ultimately dependent on the content we, everyday people, create. Their cyberinfrastructure alone does not hold inherent value for a social media platform. Rather, value is maintained and grown via larger and larger amounts of user-generated content. Indeed, what is often overlooked is that it is not just the increased volume of content: we are also increasingly tagging and associating this content, which makes for a certain level of stickiness (Jenkins et al.) within social media platforms. Social media platforms themselves are built to elicit content. For example, emails from Facebook saying that so-and-so liked this or posted that seduce us to produce and consume content on Facebook. But, from the vantage point of social media companies, and potentially of us as users, sharing should be seamless; indeed, it should almost feel ‘natural’. The notion of frictionless sharing is based on a difficult ontology that places primacy on the ‘shared self’ rather than a private self. The shared self sees a social value in sharing. Some see the social function of sharing as akin to forms of social grooming: when we share with others, we also want social feedback in return. However, not all of your photos, posts, etc. will receive a response, so the perception is that the more content one produces and shares, the more likely one is to get a social response (e.g. a like or a comment).

So I ask you readers:

  • What exactly can we ‘frictionlessly’ share? [Feel free to provide examples and links]
  • How do these affect notions of the public and private?
  • Do these technologies make us more pro-social or do they inhibit sharing?

The Facebook psychology ‘experiment’, which manipulated the emotional content of nearly 700,000 users’ feeds, provides evidence that corporations need the kinds of ethics review procedures that universities have been developing for some years around social media research. In a university context, Institutional Review Boards (IRBs) are responsible for monitoring the ethics of any research conducted at the university. The US government’s Department of Health and Human Services publishes very detailed guidance for human subjects research. Section 2(a) of their IRB guidelines states that “for the IRB to approve research […] criteria include, among other things […] risks, potential benefits, informed consent, and safeguards for human subjects”. Most IRBs take this mission quite seriously and err on the side of caution, as people’s welfare is at stake.

The reason for this is simply to protect human subjects. Indeed, IRB reviews also evaluate whether particularly vulnerable populations (e.g. minors, people with mental or physical disabilities, women who are pregnant, and various other groups depending on context) could be additionally harmed by the research being conducted. Animal research protocols follow similar logics. Before university researchers conduct social research, the ethical implications of the research are broadly evaluated against ethical and other criteria. If any human subject is participating in a social experiment or any social research, most studies require either signed informed consent or a similar protocol that informs participants of any risks associated with the research and allows them to opt out if they do not agree with the risks or any other parameters of the research.

Therefore, I was tremendously saddened to read the Proceedings of the National Academy of Sciences (PNAS) paper co-authored by Facebook data scientist Adam D. I. Kramer, Jamie E. Guillory of the University of California, San Francisco, and Jeffrey T. Hancock of Cornell University, titled ‘Experimental evidence of massive-scale emotional contagion through social networks’. The authors of this study argue that agreement to Facebook’s ‘Data Use Policy’ constitutes informed consent (p. 8789). The paper uses a Big Data (or in their words ‘massive’) perspective to evaluate emotional behavior on Facebook (of 689,003 users). Specifically, the authors designed an experiment with a control group and an experimental group in which they manipulated the emotional sentiment of content in a selection of Facebook users’ feeds by omitting positive or negative text content. Their conclusion was that the presence of positive emotion in feed content encouraged users to post more positive emotional content. They also found that the presence of negative emotion in feed content encouraged the production of negative content (hence the disease metaphor of contagion). In my opinion, any potential scientific value of these findings (however valuable they may be) is outweighed by gross ethical negligence.

This experiment should have never gone ahead. Why? Because manipulating people’s emotional behavior ALWAYS involves risks. Or as Walden succinctly put it ‘Facebook intentionally made thousands upon thousands of people sad.’

In some cases, emotional interventions may be thought justifiable by participants. But it is potential research subjects who should (via informed consent) make that decision. Without informed consent, a researcher is playing God. And the consequences are steep. In the case of the Facebook experiment, hundreds of thousands of users were subjected to negative content in their feeds. We do not know whether the experimental group included suicidal users or individuals with severe depression, eating disorders, or histories of self-harm. We will never know what harm this experiment did (it could have led to a spectrum of harm, from low-level malaise to suicide). Some users had a higher percentage of positive or negative content omitted (between 10% and 90%, according to Kramer and his co-authors). Importantly, some users had up to 90% of positive content stripped out of their feeds, which is significant. And users whose feeds were stripped of negative content could argue that they were subjected to social engineering.

To conduct a psychological experiment that is properly scientific, ethics needs to be central. That is truly not the case here. Facebook and its academic co-authors have conducted bad science and given the field of data science a bad name. PNAS is a respected journal, and anyone submitting to it should have complied with accepted ethical guidelines, regardless of the fact that Facebook is not an academic institution. Additionally, two of the authors are at academic institutions and, as such, have professional ethical standards to adhere to. In the case of the lead author from Facebook, the company’s Data Use Policy was used as a shockingly poor proxy for a full human subjects review with informed consent. What is particularly upsetting is that this was an experiment that probably did real harm. Some have argued that at least Facebook published its experiment while other companies are ultra-secretive. Rather than praising Facebook for this, we should recognize that such experiments cast light on the major ethical issues behind corporate research on our online data and the need to bring these debates into the public sphere.