Sentiment analysis and emotional contagion are nothing new, but Facebook’s recent research study, dubbed the “emotion manipulation” study by the media, has sparked heated debate about both the accuracy of the research and the ethics of experimenting on people without their knowledge or consent.
Sentiment analysis is the study of positive and negative words in communication and has been employed in various fields, including traditional and social media marketing, brand analysis, poll prediction, and even dream analysis. In today’s big-data-driven world, algorithms analyze text to determine what emotions lie behind the words. As the algorithms improve, so will the analysis, but some argue that they are still far from accurate. Recently, Facebook conducted a sentiment analysis of possible emotional contagion via status messages.
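At its simplest, this kind of analysis counts emotionally loaded words against curated lexicons. The following Python sketch illustrates only that core idea; the word lists and scoring rule are made-up, minimal examples, not any production algorithm:

```python
import re

# Tiny, made-up lexicons for illustration; real systems use
# weighted lexicons with thousands of entries.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "terrible", "hate", "awful", "angry"}

def sentiment_score(text):
    """Return (positive count - negative count) / total words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    positives = sum(1 for w in words if w in POSITIVE_WORDS)
    negatives = sum(1 for w in words if w in NEGATIVE_WORDS)
    return (positives - negatives) / len(words)

print(sentiment_score("I love this wonderful day"))     # 0.4 (positive)
print(sentiment_score("What terrible, awful weather"))  # -0.5 (negative)
```

Real systems layer negation handling, word weighting, and machine-learned models on top, but counting words against lexicons remains the core technique.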
Emotional contagion has been theorized for centuries and continues to be researched heavily by modern psychologists such as Elaine Hatfield of the University of Hawaii, who states that emotional contagion “may tell us something about the awesome contemporary power of celebrityhood and of the mass media as these agencies of large-scale emotional and cognitive contagion continue to expand their capacities to define reality for billions of people.” However, the media and thousands of Facebook users were not convinced that Facebook’s expanded capacities justified “manipulating” their feeds to mine and influence their emotional responses.
The controversy stems from how Facebook used sentiment analysis to perform research into emotional contagion, whether its findings were scientifically accurate, and whether it violated people’s rights. It all began with a simple study, made public back in March. Facebook defended the study by citing policies that, it claimed, gave it the right to use user information to improve its product. But Facebook had not updated those policies to include the word “research” until May 2012, months after the study was conducted, and that revelation caused a flurry of outrage over Facebook’s treatment of its users and raised questions about the legitimacy of its findings. Some were so outraged that they filed a complaint with the FTC, on which no decision has yet been made.
In this article, we’ll review the study that caused so much outrage and a few of the responses defending and condemning it.
Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks, by Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock
The article that started it all was published in PNAS (Proceedings of the National Academy of Sciences) on March 25, 2014. The relevant research was conducted in January 2012 by members of Facebook’s Core Data Science Team and Cornell University’s Departments of Communication and Information Science.
The study claims that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness,” and that “emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”
It isn’t yet clear exactly how Facebook is using the information it gained from this study or why it did the study in the first place, but the work could have broad implications. If Facebook and other social media sites are able not only to predict how people feel but also to alter it, that opens up a new and powerful avenue for advertisers to profit from social media marketing. One could view this as an exciting way to reach the right audience or as a questionable method of manipulating people for profit.
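To make the mechanics of such an experiment concrete, here is a hypothetical sketch of feed-level sentiment filtering of the kind the study describes: withholding a fraction of emotionally loaded posts from a user’s News Feed. Every name and rule in it is an illustrative assumption, not Facebook’s actual code:

```python
import random

# Illustrative positive-word lexicon, as in the earlier sketch.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}

def is_positive(post):
    """Crude check: does the post contain any positive-lexicon word?"""
    return any(w in POSITIVE_WORDS for w in post.lower().split())

def filter_feed(posts, suppression_rate):
    """Hypothetically withhold a fraction of positive posts from a feed."""
    return [p for p in posts
            if not (is_positive(p) and random.random() < suppression_rate)]

feed = ["Feeling great today!",
        "Traffic report: delays downtown.",
        "I love this song."]

# One experimental condition might suppress most positive posts, then
# measure whether users' own subsequent posts become less positive.
print(filter_feed(feed, suppression_rate=0.9))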
Unbelievable: Facebook Didn’t Update its Data Policy until after the Emotion Study, by Nathaniel Mott
On July 1, Nathaniel Mott turned the now three-month-old study into news when he pointed out that the arguments Facebook has used to defend conducting this research on unknowing users are faulty. The study occurred in January 2012, but Facebook didn’t update its policy to specifically include verbiage about using customer data for research until May 2012, four months later. There are also concerns that some of the research subjects were under 18.
One of the biggest concerns is that, if Facebook can successfully profit from emotional manipulation, it probably won’t be long before other social media sites, companies, and perhaps even government entities follow suit and try to influence people’s emotions in small, unnoticeable ways to further profit or an agenda. As Nathaniel stated, “It’s clear that this study has made people more concerned about the effect companies like Facebook, which wield a measure of control over how we interact with the digital world, can have on our lives.”
Don’t Worry, Facebook Still Has No Clue How You Feel, by Marcus Wohlsen
The very day after Mott’s article, Marcus Wohlsen followed up with a discussion of flaws in the accuracy of Facebook’s research itself. He points out that algorithms have a long way to go before they can actually decipher the nuances of emotion expressed in our communications, and that sentiment analysis doesn’t account for sarcasm, idioms, and other subtleties that require complex human interpretation.
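A toy example makes this concrete: a purely lexicon-based scorer, like the sketch earlier in this article, reads a plainly sarcastic sentence as positive because it sees only the words, not the intent. (The lexicon and function here are illustrative assumptions.)

```python
import re

POSITIVE_WORDS = {"great", "love", "wonderful"}

def naive_positive_count(text):
    """Count positive-lexicon words; a stand-in for simple scoring."""
    return sum(1 for w in re.findall(r"[a-z']+", text.lower())
               if w in POSITIVE_WORDS)

# Sarcasm defeats word counting: this clearly negative sentence
# scores as "positive" because of the word "great".
print(naive_positive_count("Oh great, another Monday morning commute."))  # 1
```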
However, the fact that the study may be flawed may not really matter. As he states, “The better Facebook can train computers to ‘know’ you, the more effective targeting it can promise advertisers. And to sell to you, Facebook doesn’t have to know you perfectly. It just has to make a better guess than the competition.”
It’s important to consider Marcus’s point of view about the legitimacy of Facebook’s findings because it reminds us that the situation may not be as dire as some have claimed. If sentiment analysis cannot accurately predict (or alter) people’s emotions, perhaps there isn’t so much to fear after all.
Privacy Group Files FTC Complaint over Facebook’s ‘Emotional Contagion’ Study, by Nancy Weil
By July 3, the study was trending on Twitter, CNN, and Facebook itself. Facebook users felt violated and manipulated. Just three days after the study first made major headlines, Nancy Weil, managing editor at IDG, reported on a complaint filed against Facebook by the Electronic Privacy Information Center (EPIC) with the Federal Trade Commission (FTC).
The EPIC complaint claims that Facebook violated a 20-year consent decree requiring the protection of users’ privacy and that “Facebook ‘purposefully messed with people’s minds’ in a ‘secretive and non-consensual’ study on nearly 700,000 users whose emotions were intentionally manipulated when the company altered their news feeds for research purposes.”
Facebook defended itself by stating, “When someone signs up for Facebook, we’ve always asked permission to use their information to provide and enhance the services we offer. To suggest we conducted any corporate research without permission is complete fiction. Companies that want to improve their services use the information their customers provide, whether their privacy policy uses the word ‘research’ or not.”
It’s not yet known whether the FTC will sanction Facebook for its actions or whether Facebook’s policies protect it, as the company claims.
FTC Asked to ‘Evaluate’ Facebook’s Research Methods, but Users Don’t Care, by Kate Kaye
On July 9, Senator Mark Warner asked the FTC to consider more stringent guidelines for conducting research. However, as Kate Kaye of Advertising Age pointed out on July 14, while the media, privacy groups, user researchers, government officials, and some users were outraged, the outrage hasn’t had much of an effect on Facebook logins.
People often use their Facebook credentials to log in to a myriad of other sites; it’s a simple way to use one login for most of the sites they visit daily. According to Kaye, these logins have barely been affected: “Since June 28, when the Facebook research news started to percolate, logins via Facebook account information on non-Facebook sites barely budged, dipping from 43.6% to 43.3%.”
Ms. Kaye didn’t delve into the possible reasons for this indifference, but it seems that people are willing to forgive potential privacy infringements to make their busy days a bit easier. In this age of online sharing and openness, it’s possible that we’ve become desensitized to privacy violations. It’s also possible that Facebook is such a big part of people’s lives that a bad user experience and a hit to its “trustworthy” brand aren’t enough to make users abandon their well-tended Facebook profiles.
What’s Next?
It’s possible that Facebook will see a drop in users because of the outrage, but it’s doubtful. 99 Days of Freedom, a campaign started on July 9 that encourages people to see whether they are happier after not using Facebook for 99 days, hints at some users’ dissatisfaction. However, as of this article’s publication, only 25,294 people had accepted the challenge, a very small dent in Facebook’s approximately 1.23 billion monthly active users.
Even if the FTC and other regulators decide that Facebook was within its rights, and even if Facebook is “too big to fail,” the more important conversation concerns the value and morality of using analytics to interpret and influence emotions, and the effect such research can have on the overall user experience.
Whether you are a Facebook user or not, this research study brings up some important questions we should all consider. Are we required to participate in such studies, as a result of signing Terms and Conditions that very few of us actually read? Or are we entitled to privacy across the Internet?