In case you missed it, about a week ago Facebook published details about their extensive emotion experiment conducted in January 2012.
Facebook constantly manipulates its website in order to improve, develop and test the user experience, as most websites and applications do through ad placement and recommendations based on content the user has viewed before. But this time was different: the main argument against Facebook’s actions is the apparent lack of thought given to the ethical implications of the study.
Researchers conducted the study on 689,003 users to investigate the phenomenon known as “emotional contagion”. It was found that emotional states can be triggered by content and spread across social networks. The official outline of the social experiment on the Proceedings of the National Academy of Sciences (PNAS) journal website reads:
“We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”
Of course, this manipulation angered many Facebook users. The research, conducted internally and, it is argued, without user consent, sparked uproar among the experiment’s subjects and the wider Facebook community, who felt that manipulating users’ news feeds to assess the effect on their emotions violated personal privacy and ignored basic online research ethics.
Facebook and the associated researchers at Cornell University and the University of California have faced a huge amount of backlash since the experiment’s aims and outcomes were released. In particular, the US privacy group the Electronic Privacy Information Center (EPIC) has filed a formal complaint with the Federal Trade Commission (FTC) over the unethical use of private user data. The UK’s Information Commissioner’s Office (ICO) is also investigating whether the site breached its own data regulations during the study. We can assume these privacy battles will be ongoing for a while; in the meantime, some users have taken the situation into their own hands and spoken out, urging people to delete their Facebook accounts to escape further privacy intrusions by the company. Yet the site remains as popular as it has always been.
Adam Kramer, Facebook data scientist, apologised (sort of) on behalf of the company in this post on his Facebook Wall:
“The reason we did this research is because we care about the emotional impact of Facebook and the people that use our product. We felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out. At the same time, we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook. We didn’t clearly state our motivations in the paper.”
The researchers involved maintained that the controversial experiment was conducted with “informed consent”, since all users agree to the company’s lengthy Data Use Policy upon signing up, which includes the following statement:
“…In addition to helping people see and find things that you do and share, we may use the information we receive about you… for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”
Regardless of Kramer’s statement and the data-use argument, it seems that most of the initial anxiety and anger over the study has blown over within the Facebook community during the past week, perhaps because we are already well aware of Facebook’s rather intrusive data and privacy agreements.