11 Facebook Case Studies & Success Stories to Inspire You

Pamela Bump

Published: August 05, 2019

Although Facebook is one of the older social media networks, it's still a thriving platform for businesses that want to boost brand awareness.


With over 2.38 billion monthly active users, you can use the platform to spread the word about your business in a number of different ways -- from photos or videos to paid advertisements.

Because there are so many marketing options and opportunities on Facebook, it can be hard to tell which strategy is actually best for your brand.

If you're not sure where to start, you can read case studies to learn about strategies that marketing pros and similar businesses have tried in the past.

A case study will often go over a brand's marketing challenge, goals, a campaign's key details, and its results. This gives you a real-life glimpse at what led a marketing team to reach success on Facebook. Case studies also can help you avoid or navigate common challenges that other companies faced when implementing a new Facebook strategy.

To help you choose your next Facebook strategy, we've compiled a list of 11 great case studies that show how a number of different companies have succeeded on the platform.

Even if your company has a lower budget or sells a different product, we hope these case studies will inspire you and give you creative ideas for your own scalable Facebook strategy.


Facebook Brand Awareness Case Studies:

During the 2017 holiday season, the jewelry company Pandora wanted to boost brand awareness in the German market. They also wanted to see if video ads could have the same success as their other Facebook ad formats.

They began this experiment by working with Facebook to adapt a successful TV commercial for the platform. Here's a look at the original commercial:

The ad was cut down to a 15-second clip showing a woman receiving a Pandora necklace from her partner, and it was cropped to a square format for mobile users. Pandora then ran the ad targeting German audiences between the ages of 18 and 50. It appeared in newsfeeds and as an in-stream video ad.

Results: According to the case study, the video campaign lifted brand sentiment during the holiday season, with a 10-point lift in favorability. While neither Pandora nor the case study disclosed how the favorability score was measured, they note that the lift means more consumers favored Pandora over other jewelers because of the ad.

Financially, the campaign also provided ROI with a 61% lift in purchases and a 42% increase in new buyers.

Video can be memorable, emotional, and persuasive. While the case study notes that Pandora always had success with ads and purchases, the jeweler saw that a video format could boost brand awareness even further.

In just 15 seconds, Pandora was able to tell a short story that their target audience could identify with while also showing off their product. The increase in favorability shows that audiences who saw the ad connected with it and preferred the jeweler over other companies because of the marketing technique.

Part of Pandora's success might also be due to the video's platform adaptation. Although they didn't create a new video specifically for Facebook, they picked a commercial that had already resonated with TV audiences and tweaked it to grab the attention of fast-paced Facebook users. This is a good example of how a company can be resourceful with the content it already has while still catering to its online audience.

Rock & Roll Hall of Fame

The Rock & Roll Hall of Fame, a HubSpot customer, wanted to boost brand awareness and get more ticket purchases to their museum. Since they'd mainly used traditional customer outreach strategies in the past, they wanted to experiment with more ways of reaching audiences on social media.

Because the museum's social media team recognized how often they personally used Facebook Messenger, they decided to implement a messaging strategy on the Hall of Fame's official business page.

From the business page, users can click the Get Started button and open a chat with the Hall of Fame. Through the chat, social media managers were able to quickly reply to questions or comments from fans, followers, and prospective visitors. The reps would also send helpful links detailing venue pricing, events, other promotions, and activities in the surrounding area.

Rock & Roll Hall of Fame Social Media Team responds to Facebook Messenger messages

Results: Since the Messenger launch, the Hall of Fame claims to have grown its audience size by 81% and increased sales from prospects by 12%. The feature was so successful that the team even received 54 messages on one Easter Sunday.

Being available to connect with your audiences through Messenger can be beneficial to your business and your brand. While the Rock & Roll Hall of Fame boosted purchases, they also got to interact with their audiences on a personal level. Their availability might have made them look like a more trustworthy, friendly brand that was actually interested in their fanbase rather than just sales.

Facebook Reach Case Study:

In early 2016, Buffer started to see a decline in their brand reach and engagement on Facebook due to algorithm changes that favored individuals over brands. In an effort to prevent their engagement and reach numbers from dropping even further, the brand decided to cut their posting frequency by 50%.

With less time spent on a high volume of posts, they could focus on creating fewer, higher-quality posts aimed squarely at gaining engagement. For example, instead of posting standard links with quick captions, they began to experiment with different formats, such as posts with multi-paragraph captions and videos. After starting the strategy in 2016, they continued it through 2018.

Here's an example of an interview that was produced and shared exclusively on Facebook.

Results: By 2018, Buffer claimed that average weekly reach had nearly tripled, from 44,000 at the beginning of the experiment to 120,000. The page's average daily engagements also doubled, from roughly 500 per day to around 1,000.

In 2018, Buffer claimed that their posts reached between 5,000 to 20,000 people, while posts from before the experiment reached less than 2,000.

Although Buffer began the experiment before major Facebook algorithm changes, they updated this case study in 2018, claiming that this strategy has endured platform shifts and is still providing them with high reach and engagement.

It can be easy to overpost on a social network and just hope it works. But constant posts that get no reach or engagement can be a waste of your time and money. They might even make your page look desperate.

What Buffer found was that less is more. Rather than spending your time posting whatever you can, you should take time to brainstorm and schedule out interesting posts that speak directly to your customer.

Facebook Video Views Case Studies:

Gearing up for Halloween in 2016, Tomcat, a rodent extermination company, wanted to experiment with a puppet-filled, horror-themed, live video event. The narrative, which was created in part by their marketing agency, told the story of a few oblivious teenage mice that were vacationing in a haunted cabin in the woods. At peak points of the story, audiences were asked to use the comments to choose which mouse puppet would die next or how they would die.

Prior to the video event, Tomcat also rolled out movie posters with the event date, an image of the scared mouse puppets, and a headline saying, "Spoiler: They all die!"

Results: It turns out that a lot of people enjoy killing rodents. The live video got over 2.3 million unique views, and 21% of viewers actively participated. As an added bonus, the video also boosted Tomcat's Facebook fanbase by 58% and earned the brand a Cyber Lion at the 2017 Cannes Lions awards.

Here's a hilarious sizzle reel that shows a few clips from the video and a few key stats:

This example shows how creative content marketing can help even the most practical, unglamorous businesses gain engagement. While pest control can be a dry topic for a video, the brand highlighted it in a creative and funny way.

This study also highlights how interactivity can provide huge bonuses when it comes to views and engagement. Even though many of the viewers knew all the rats would die, many still participated just because it was fun.

Not only might this pique brand interest from people who hadn't thought that deeply about pest control, but interactivity can also help a video algorithmically. As more people comment on, share, and react to a live video, it's more likely to be prioritized and displayed in the feeds of others.

In 2017, HubSpot's social media team embarked on an experiment where they pivoted their video goals from lead generation to audience engagement. Prior to this shift, HubSpot had regularly posted Facebook videos that were created to generate leads. As part of the new strategy, the team brainstormed a list of headlines and topics that they thought their social media audience would actually like, rather than just topics that would generate sales.

Along with this pivot, they also experimented with other video elements, including video design, formatting, and size.

Results: After they started to launch the audience-friendly videos, they saw monthly video views jump from 50,000 to 1 million in mid-2017.

Creating content that caters to your fanbase's interests and the social platform it's posted on can be much more effective than content that seeks out leads.

While videos with the pure goal of selling a product can fall flat with views and engagement, creative videos that intrigue and inform your audiences about a topic they relate to can be a much more effective way to gain and keep your audience. Once the audience trusts you and consumes your content regularly, they might even trust and gain interest in your products.

Facebook App Installs Case Study:

FoxNext Games

FoxNext Games, a video game company owned by 20th Century Fox, wanted to improve the level of app installs for one of its newest releases, Marvel Strike Force. While FoxNext had previously advertised other games with Facebook video ads, they wanted to test out the swipe-able photo carousel post format. Each photo, designed like a playing card, highlighted a different element of the game.

Marvel Strike Force playing card carousel on Facebook

The ad offered a call-to-action button that said "Install Now" and led to the app store where the game could be downloaded. FoxNext launched it on both Facebook and Instagram. To see if the carousel was more effective than video campaigns, they compared two ads that promoted the same game in each format.

Results: According to Facebook, the photo ads delivered a 6% higher return on ad spend, 14% more revenue, 61% more installs, and 33% lower cost per app install.

Takeaways: If your product is visual, a carousel can be a great way to show off its different elements. This case study also shows how designing ads around your audience's interests can help each post stand out to them. In this scenario, FoxNext needed to advertise a game about superheroes. They knew that their fanbase was interested in gaming, adventure, and comic books, so they created carousels that felt more like playing cards to expand on the game's visual narrative.

Facebook Lead Gen Case Study:

Major Impact Media

In 2019, Major Impact Media released a case study about a real-estate client that wanted to generate more leads. Prior to working with Major Impact, the Minneapolis, Minnesota, brokerage had hired another firm to build an online lead generation funnel that garnered no leads in the two months it was active. They turned to Major Impact looking for a process that could generate online leads on a regular basis.

As part of the lead generation process, the marketing and brokerage firms made a series of Facebook ads with the lead generation objective set. Major Impact also helped the company build a CRM that could capture these leads as they came in.

Results: Within a day, they received eight leads for $2.45 each. In the next 90 days, the marketing firm claimed the ads generated over 370 local leads at the average cost of $6.77 each. Each lead gave the company their name, email, and phone number.

Although these results sound like a promising improvement, readers of this case study should keep in mind that no count of qualified leads or ROI figure was disclosed. While the study states that leads were gained, it's unclear which of them led to actual sales -- if any.

This shows how Facebook ad targeting can be helpful when you're seeking out leads from a specific audience in a local area. The Minneapolis brokerage's original marketing and social media strategies weren't succeeding because they were looking for a very specific audience of prospective buyers in the immediate area.

Ad targeting allowed their posts to be placed on the news feeds of people in the area who might be searching for real estate or have interests related to buying a home. This, in turn, might have caused them more success in gaining leads.

Facebook Engagement Case Study:

When the eyewear brand Hawkers partnered up with Spanish clothing brand El Ganso for a joint line of sunglasses, Hawkers' marketing team wanted to see which Facebook ad format would garner the most engagement. Between March and April of 2017, they launched a combination of standard ads and collection ads on Facebook.

While their standard ads had a photo, a caption and a call-to-action linking to their site, the collection ads offered a header image or video, followed by smaller images of sunglasses from the line underneath.

Hawkers collection style Facebook ad

Image from Digital Training Academy

To A/B test the effectiveness of the two ad types, Hawkers showed half of its audience standard photo ads while the other half was presented with the collection format. The company also used Facebook's Lookalike Audiences feature to target the ads to their existing audiences and similar users in Spain.

Results: The collection ads boosted engagement by 86%. The collection ads also saw a 51% higher rate of return than the standard ads.

This study shows how an ad that shows off different elements of your product or service could be more engaging to your audience. With collection ads, audiences can see a bunch of products as well as a main image or video about the sunglass line. With a standard single photo or video, the number of products you show might be limited. While some users might not respond well to one image or video, they might engage if they see a number of different products or styles they like.

Facebook Conversion Case Study:

Femibion from Merck

Femibion, a German family-planning brand owned by Merck Consumer Health, wanted to generate leads by offering audiences a free baby-planning book called "Femibion BabyPlanung." The company worked with Facebook to launch a multistage campaign that combined traditional image and link ads with carousel ads.

The campaign began with a cheeky series of carousel ads that featured tasteful pictures of "baby-making places," or locations where women might conceive a child. The later ads were a more standard format that displayed an image of the book and a call-to-action.

When the first ads launched in December 2016, they were targeted to female audiences in Germany. In 2017, during the later stages of the campaign, the standard ads were retargeted to women who had previously interacted with the carousel ads. With this strategy, people who already showed interest would see more ads for the free product offer. This could cause them to remember the offer or click when they saw it a second time.

Results: By the time the promotion ended in April 2017, the ads had seen a 35% increase in conversion rate. The company had also generated 10,000 leads and achieved a two-fold decrease in its sample distribution costs.

This case study shows how a company successfully brought leads through the funnel. By targeting women in Germany for their first series of creative "baby-making" ads, they gained attention from a broad audience. Then, by focusing their next round of ads on women who'd already shown some type of interest in their product, they reminded those audiences of the offer which may have enabled those people to convert to leads.

Facebook Product Sales Case Study:

In an effort to boost sales from its Latin American audiences, Samsung promoted the 2015 Argentina launch of the Galaxy S6 smartphone with a one-month Facebook campaign.

The campaign featured three videos that highlighted the phone's design, camera, and long battery life respectively.

One video was released each week and all of them were targeted to men and women in Argentina. In the fourth week of the campaign, Samsung launched more traditional video and photo ads about the product. These ads were specifically targeted to people who'd engaged with the videos and their lookalike audiences.

Results: Samsung received 500% ROI from the month-long campaign and a 7% increase in new customers.

Like Femibion, Samsung tested a multi-stage ad strategy in which the targeting got more specific as the promotion continued. They too saw the benefit of targeting ads to users who had already shown interest in the first rounds of advertisements. This strategy seems especially promising when you're trying to gain more qualified leads.

Facebook Store Visits Case Study:

Church's Chicken

The world's third-largest chicken restaurant chain, Church's Chicken, wanted to see if it could use Facebook to increase in-restaurant traffic. From February to October of 2017, the chain ran a series of ads with the "Store Traffic" ad objective. Rather than sending customers to a purchasing or order page, these ads gave users a call-to-action that said "Get Directions." The dynamic store-traffic ads also showed users the store information for the restaurant closest to them.

Church Chicken Facebook ad highlighting location

Image from Facebook

The ads ran on desktop and mobile newsfeeds and were targeted at people living near a Church's Chicken who were also interested in "quick-serve restaurants." The study also noted that third-party data was used to target customers who were "big spenders" at these types of restaurants.

To measure the results, the team compared data from Facebook's store-reporting feature with data from all of its locations.

Results: The ads resulted in over 592,000 store visits with an 800% ROI. Each visit cost the company an average of $1.14. The ROI of the campaign was four times the team's return goal.

If you don't have an ecommerce business, Facebook ads can still be helpful if they're strategized properly. In this example, Church's ads targeted locals who like quick-serve restaurants and served them a dynamic ad notifying them of a restaurant in their immediate area. This type of targeting and ad strategy could be helpful to small or hyperlocal businesses that want to gain foot traffic or awareness from the prospective customers closest to them.

Navigating Case Studies

If you're a marketer who wants to execute proven Facebook strategies, case studies will be incredibly helpful. If the case studies on the list above didn't answer one of your burning Facebook questions, there are plenty of other resources and success stories online.

As you look for a great case study to model your next campaign strategy on, look for stories that seem credible and don't feel too vague. The best case studies clearly go over a company's mission, challenge, process, and results.

Because many of the case studies you'll find are from big businesses, you might also want to look at strategies that you can implement on a smaller scale. For example, while you may not be able to create a full commercial at the production quality of Pandora, you might still be able to make a lower-budget video that still conveys a strong message to your audience.

If you're interested in starting a paid campaign, check out this helpful how-to post. If you just want to take advantage of free options, we also have some great information on Facebook Live and Facebook for Business.


Michelle N. Meyer

Everything You Need to Know About Facebook's Controversial Emotion Experiment


The closest any of us who might have participated in Facebook's huge social engineering study came to actually consenting to participate was signing up for the service. Facebook's Data Use Policy warns users that Facebook “may use the information we receive about you…for internal operations, including troubleshooting, data analysis, testing, research and service improvement.” This has led to charges that the study violated laws designed to protect human research subjects. But it turns out that those laws don’t apply to the study, and even if they did, it could have been approved, perhaps with some tweaks. Why this is the case requires a bit of explanation.

#### Michelle N. Meyer

##### About

[Michelle N. Meyer](http://www.michellenmeyer.com/) is an Assistant Professor and Director of Bioethics Policy in the Union Graduate College-Icahn School of Medicine at Mount Sinai Bioethics Program, where she writes and teaches at the intersection of law, science, and philosophy. She is a member of the board of three non-profit organizations devoted to scientific research, including the Board of Directors of [PersonalGenomes.org](http://personalgenomes.org/).

For one week in 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly selected users (about 1 of every 2,500 Facebook users). The results of this study were just published in the Proceedings of the National Academy of Sciences (PNAS).

As the authors explain, “[b]ecause people’s friends frequently produce much more content than one person can view,” Facebook ordinarily filters News Feed content “via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging.” In this study, the algorithm filtered content based on its emotional content. A post was identified as “positive” or “negative” if it used at least one word identified as positive or negative by software (run automatically without researchers accessing users’ text).

Some critics of the experiment have characterized it as one in which the researchers intentionally tried “to make users sad.” With the benefit of hindsight, they claim that the study merely tested the perfectly obvious proposition that reducing the amount of positive content in a user’s News Feed would cause that user to use more negative words and fewer positive words themselves and/or to become less happy (more on the gap between these effects in a minute). But that’s *not* what some prior studies would have predicted.

Previous studies both in the U.S. and in Germany had found that the largely positive, often self-promotional content that Facebook tends to feature has made users feel bitter and resentful---a phenomenon the German researchers memorably call “the self-promotion-envy spiral.” Those studies would have predicted that reducing the positive content in a user’s feed might actually make users less sad. And it makes sense that Facebook would want to determine what will make users spend more time on its site rather than close that tab in disgust or despair. The study’s first author, Adam Kramer of Facebook, confirms---on Facebook, of course---that they did indeed want to investigate the theory that seeing friends’ positive content makes users sad.

To do so, the researchers conducted two experiments, with a total of four groups of users (about 155,000 each). In the first experiment, Facebook reduced the positive content of News Feeds. Each positive post “had between a 10-percent and 90-percent chance (based on their User ID) of being omitted from their News Feed for that specific viewing.” In the second experiment, Facebook reduced the negative content of News Feeds in the same manner. In both experiments, these treatment conditions were compared with control conditions in which a similar portion of posts were randomly filtered out (i.e., without regard to emotional content). Note that whatever negativity users were exposed to came from their own friends, not, somehow, from Facebook engineers. In the first, presumably most objectionable, experiment, the researchers chose to filter out varying amounts (10 percent to 90 percent) of friends’ positive content, thereby leaving a News Feed more concentrated with posts in which a user’s friend had written at least one negative word.
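To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of per-user, per-viewing filtering the paper describes. It is a reconstruction based only on the quoted description; the function names, the hashing scheme, and the post representation are my own assumptions, not Facebook's actual code.

```python
import hashlib
import random

# Sketch of the omission procedure described in the paper (a reconstruction,
# not Facebook's code): each user gets a fixed omission probability between
# 10% and 90%, derived deterministically from their user ID, and each post
# carrying the targeted emotion is dropped with that probability at each viewing.

def omission_probability(user_id: str) -> float:
    """Map a user ID to a stable probability in [0.10, 0.90]."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return 0.10 + 0.80 * (digest % 1000) / 999

def filter_feed(user_id: str, posts: list[dict], target: str) -> list[dict]:
    """Drop posts flagged with the targeted emotion ("positive" in the first
    experiment, "negative" in the second) with the user's omission probability."""
    p = omission_probability(user_id)
    return [post for post in posts
            if not (post["emotion"] == target and random.random() < p)]
```

In the control conditions described above, a similar fraction of posts would be dropped without regard to the emotion flag.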

The results:


[f]or people who had positive content reduced in their News Feed, a larger percentage of words in people’s status updates were negative and a smaller percentage were positive. When negativity was reduced, the opposite pattern occurred. These results suggest that the emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks.

Note two things. First, while statistically significant, these effect sizes are, as the authors acknowledge, quite small. The largest effect size was a mere two hundredths of a standard deviation (d = .02). The smallest was one thousandth of a standard deviation (d = .001). The authors suggest that their findings are primarily significant for public health purposes, because when aggregated, even small individual effects can have large social consequences.
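For readers unfamiliar with the statistic, Cohen's d expresses the difference between two group means in units of their pooled standard deviation (this is the standard textbook definition, not a formula quoted from the paper itself):

$$
d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
$$

So d = .02 means the groups' average rates of emotional word use differed by only two hundredths of one standard deviation.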

Second, although the researchers conclude that their experiments constitute evidence of “social contagion”---that is, that “emotional states can be transferred to others”---this overstates what they could possibly know from this study. The fact that someone exposed to positive words very slightly increased the amount of positive words that she then used in her Facebook posts does not necessarily mean that this change in her News Feed content caused any change in her mood. The very slight increase in the use of positive words could simply be a matter of keeping up (or down, in the case of the reduced positivity experiment) with the Joneses. It seems highly likely that Facebook users experience (varying degrees of) pressure to conform to social norms about acceptable levels of snark and kvetching---and of bragging and pollyannaisms. Someone who is already internally grumbling about how United Statesians are, like, such total posers during the World Cup may feel freer to voice that complaint on Facebook than when his feed was more densely concentrated with posts of the “human beings are so great and I feel so lucky to know you all---group hug!” variety. That doesn’t necessarily mean he felt any worse than he otherwise would have, much less that any increase in negative affect that may have occurred rose to the level of a mental health crisis, as some have suggested.

One threshold question in determining whether this study required ethical approval by an Institutional Review Board (IRB) is whether it constituted “human subjects research.” An activity is considered “research” under the federal regulations if it is “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge.” The study was plenty systematic, and it was designed to investigate the “self-promotion-envy spiral” theory of social networks. Check.

As defined by the regulations, a “human subject” is “a living individual about whom an investigator... obtains,” inter alia, “data through intervention.” Intervention, in turn, includes “manipulations of the subject or the subject’s environment that are performed for research purposes.” According to guidance issued by the Office for Human Research Protections (OHRP), the federal agency tasked with overseeing application of the regulations to HHS-conducted and -funded human subjects research, “orchestrating environmental events or social interactions” constitutes manipulation.

I suppose one could argue---in the tradition of choice architecture---that to say that Facebook manipulated its users’ environment is a near tautology. Facebook employs algorithms to filter News Feeds, which it apparently regularly tweaks in an effort to maximize user satisfaction, ideal ad placement, and so on. It may be that Facebook regularly changes the algorithms that determine how a user experiences her News Feed.

Given this baseline of constant manipulation, you could say that this study did not involve any incremental additional manipulation. No manipulation, no intervention. No intervention, no human subjects. No human subjects, no federal regulations requiring IRB approval. But…

That doesn’t mean that moving from one algorithm to the next doesn’t constitute a manipulation of the user’s environment. Therefore, I assume that this study meets the federal definition of “human subjects research” (HSR).

Importantly---and contrary to the apparent beliefs of some commentators---not all HSR is subject to the federal regulations, including IRB review. By the terms of the regulations themselves, HSR is subject to IRB review only when it is conducted or funded by any of several federal departments and agencies (so-called Common Rule agencies), or when it will form the basis of an FDA marketing application. HSR conducted and funded solely by entities like Facebook is not subject to federal research regulations.

But this study was not conducted by Facebook alone; the second and third authors on the paper have appointments at the University of California, San Francisco, and Cornell, respectively. Although some commentators assume that university research is only subject to the federal regulations when that research is funded by the government, this, too, is incorrect. Any college or university that accepts any research funds from any Common Rule agency must sign a Federalwide Assurance (FWA), a boilerplate contract between the institution and OHRP in which the institution identifies the duly-formed and registered IRB that will review the funded research. The FWA invites institutions to voluntarily commit to extend the requirement of IRB review from funded projects to all human subject research in which the institution is engaged, regardless of the source of funding. Historically, the vast majority of colleges and universities have agreed to “check the box,” as it’s called. If you are a student or a faculty member at an institution that has checked the box, then any HSR you conduct must be approved by an IRB.

As I recently had occasion to discover, Cornell has indeed checked the box (see #5 here). UCSF appears to have done so, as well, although it’s possible that it simply requires IRB review of all HSR by institutional policy, rather than FWA contract.

But these FWAs only require IRB review if the two authors’ participation in the Facebook study meant that Cornell and UCSF were “engaged” in research. When an institution is “engaged in research” turns out to be an important legal question in much collaborative research, and one the Common Rule itself doesn’t address. OHRP, however, has issued (non-binding, of course) guidance on the matter. The general rule is that an institution is engaged in research when its employee or agent obtains data about subjects through intervention or interaction, identifiable private information about subjects, or subjects’ informed consent.

According to the author contributions section of the PNAS paper, the Facebook-affiliated author “performed [the] research” and “analyzed [the] data.” The two academic authors merely helped him design the research and write the paper. They would not seem to have been involved, then, in obtaining either data or informed consent. (And even if the academic authors had gotten their hands on individualized data, so long as that data remained coded by Facebook user ID numbers that did not allow them to readily ascertain subjects’ identities, OHRP would not consider them to have been engaged in research.)

>Because the two academic authors merely designed the research and wrote the paper, they would not seem to have been involved, then, in obtaining either data or informed consent.

It would seem, then, that neither UCSF nor Cornell was "engaged in research" and, since Facebook was engaged in HSR but is not subject to the federal regulations, that IRB approval was not required. Whether that’s a good or a bad thing is a separate question, of course. (A previous report that the Cornell researcher had received funding from the Army Research Office---which, as part of the Department of Defense, is a Common Rule agency and would have triggered IRB review---has been retracted.) In fact, as this piece went to press on Monday afternoon, Cornell’s media relations had just issued a statement that provided exactly this explanation for why it determined that IRB review was not required.

Princeton psychologist Susan Fiske, who edited the PNAS article, told a Los Angeles Times reporter the following:

[Screenshot of Fiske’s quoted statement]

But then Forbes reported that Fiske “misunderstood the nature of the approval. A source familiar with the matter says the study was approved only through an internal review process at Facebook, not through a university Institutional Review Board.”

Most recently, Fiske told the Atlantic that Cornell's IRB did indeed review the study, and approved it as having involved a "pre-existing dataset." Given that, according to the PNAS paper, the two academic researchers collaborated with the Facebook researcher in designing the research, it strikes me as disingenuous to claim that the dataset preexisted the academic researchers' involvement. As I suggested above, however, it does strike me as correct to conclude that, given the academic researchers' particular contributions to the study, neither UCSF nor Cornell was engaged in research, and hence that IRB review was not required at all.

>It strikes me as disingenuous to claim that the dataset preexisted the academic researchers' involvement.

But if an IRB had reviewed it, could it have approved it, consistent with a plausible interpretation of the Common Rule? The answer, I think, is Yes, although under the federal regulations, the study ought to have required a bit more informed consent than was present here (about which more below).

Many have expressed outrage that any IRB could approve this study, and there has been speculation about the possible grounds the IRB might have given. The Atlantic suggests that the “experiment is almost certainly legal. In the company’s current terms of service, Facebook users relinquish the use of their data for ‘data analysis, testing, [and] research.’” But once a study is under an IRB’s jurisdiction, the IRB is obligated to apply the standards of informed consent set out in the federal regulations, which go well, well beyond a one-time click-through consent to unspecified “research.” Facebook’s own terms of service are simply not relevant. Not directly, anyway.

>Facebook’s own terms of service are simply not relevant. Not directly, anyway.

According to Prof. Fiske’s now-uncertain report of her conversation with the authors, by contrast, the local IRB approved the study “on the grounds that Facebook apparently manipulates people’s News Feeds all the time.” This fact actually is relevant to a proper application of the Common Rule to the study.

Here’s how. Section 46.116(d) of the regulations provides:

An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:

1. The research involves no more than minimal risk to the subjects;
2. The waiver or alteration will not adversely affect the rights and welfare of the subjects;
3. The research could not practicably be carried out without the waiver or alteration; and
4. Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

The Common Rule defines “minimal risk” to mean “that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life.” The IRB might plausibly have decided that since the subjects’ environments, like those of all Facebook users, are constantly being manipulated by Facebook, the study’s risks were no greater than what the subjects experience in daily life as regular Facebook users, and so the study posed no more than “minimal risk” to them.

That strikes me as a winning argument, unless there’s something about this manipulation of users’ News Feeds that was significantly riskier than other Facebook manipulations. It’s hard to say, since we don’t know all the ways the company adjusts its algorithms---or the effects of most of these unpublicized manipulations.

Even if you don’t buy that Facebook regularly manipulates users’ emotions (and recall, again, that it’s not clear that the experiment in fact did alter users’ emotions), other actors intentionally manipulate our emotions every day. Consider “fear appeals”---ads and other messages intended to shape the recipient’s behavior by making her feel a negative emotion (usually fear, but also sadness or distress). Examples include “scared straight” programs for youth warning of the dangers of alcohol, smoking, and drugs, and singer-songwriter Sarah McLachlan’s ASPCA animal cruelty donation appeal (which I cannot watch without becoming upset---YMMV---and there’s no way on earth I’m being dragged to the “emotional manipulation” that is, according to one critic, The Fault in Our Stars).

Continuing with the rest of the § 46.116(d) criteria, the IRB might also plausibly have found that participating in the study without Common Rule-type informed consent would not “adversely affect the rights and welfare of the subjects,” since Facebook has limited users’ rights by requiring them to agree that their information may be used “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

__Finally, the study couldn’t feasibly have been conducted with full Common Rule-style informed consent---which requires a statement of the purpose of the research and the specific risks that are foreseen---without biasing the entire study.__ Of course, surely the IRB, without biasing the study, could have required researchers to provide subjects with some information about this specific study beyond the single word “research” that appears in the general Data Use Policy, as well as the opportunity to decline to participate in this particular study, and these things should have been required on a plain reading of § 46.116(d).

In other words, the study was probably eligible for “alteration” in some of the elements of informed consent otherwise required by the regulations, but not for a blanket waiver.

Moreover, subjects should have been debriefed by Facebook and the other researchers, not left to read media accounts of the study and wonder whether they were among the randomly-selected subjects studied.

Still, the bottom line is that---assuming the experiment required IRB approval at all---it was probably approvable in some form that involved much less than 100 percent disclosure about exactly what Facebook planned to do and why.

There are (at least) two ways of thinking about this feedback loop between the risks we encounter in daily life and what counts as “minimal risk” research for purposes of the federal regulations.

One view is that once upon a time, the primary sources of emotional manipulation in a person’s life were called “toxic people,” and once you figured out who those people were, you would avoid them as much as possible. Now, everyone’s trying to nudge, data mine, or manipulate you into doing or feeling or not doing or not feeling something, and they have access to you 24/7 through targeted ads, sophisticated algorithms, and so on, and the ubiquity is being further used against us by watering down human subjects research protections.

There’s something to that lament.

The other view is that this bootstrapping is entirely appropriate. If Facebook had acted on its own, it could have tweaked its algorithms to cause more or fewer positive posts in users’ News Feeds even *without* obtaining users’ click-through consent (it’s not as if Facebook promises its users that it will feed them their friends’ status updates in any particular way), and certainly without going through the IRB approval process. It’s only once someone tries to learn something about the effects of that activity and share that knowledge with the world that we throw up obstacles.

>Would we have ever known the extent to which Facebook manipulates its News Feed algorithms had Facebook not collaborated with academics incentivized to publish their findings?

Academic researchers’ status as academics already makes it more burdensome for them to engage in exactly the same kinds of studies that corporations like Facebook can engage in at will. If, on top of that, IRBs didn’t recognize our society’s shifting expectations of privacy (and manipulation) and incorporate those evolving expectations into their minimal risk analysis, that would make academic research still harder, and would only serve to help ensure that those who are most likely to study the effects of a manipulative practice and share those results with the rest of us have reduced incentives to do so. Would we have ever known the extent to which Facebook manipulates its News Feed algorithms had Facebook not collaborated with academics incentivized to publish their findings?

We can certainly have a conversation about the appropriateness of Facebook-like manipulations, data mining, and other 21st-century practices. But so long as we allow private entities to engage freely in these practices, we ought not unduly restrain academics trying to determine their effects. Recall those fear appeals I mentioned above. As one social psychology doctoral candidate noted on Twitter, IRBs make it impossible to study the effects of appeals that carry the same intensity of fear as real-world appeals to which people are exposed routinely, and on a mass scale, with unknown consequences. That doesn’t make a lot of sense. What corporations can do at will to serve their bottom line, and non-profits can do to serve their cause, we shouldn’t make (even) harder---or impossible---for those seeking to produce generalizable knowledge to do.

  • This post originally appeared on The Faculty Lounge under the headline *How an IRB Could Have Legitimately Approved the Facebook Experiment—and Why that May Be a Good Thing*.


Everything We Know About Facebook’s Secret Mood-Manipulation Experiment

It was probably legal. But was it ethical?


Updated, 09/08/14

Facebook’s News Feed—the main list of status updates, messages, and photos you see when you open Facebook on your computer or phone—is not a perfect mirror of the world.

But few users expect that Facebook would change its News Feed in order to manipulate their emotional state.

We now know that’s exactly what happened two years ago. For one week in January 2012, data scientists skewed what almost 700,000 Facebook users saw when they logged into its service. Some people were shown content with a preponderance of happy and positive words; some were shown content analyzed as sadder than average. And when the week was over, these manipulated users were more likely to post either especially positive or negative words themselves.

This tinkering was just revealed as part of a new study, published in the prestigious Proceedings of the National Academy of Sciences. Many previous studies have used Facebook data to examine “emotional contagion,” as this one did. This study is different because, while other studies have observed Facebook user data, this one set out to manipulate them.

The experiment is almost certainly legal. In the company’s current terms of service, Facebook users relinquish the use of their data for “data analysis, testing, [and] research.” Is it ethical, though? Since news of the study first emerged, I’ve seen and heard both privacy advocates and casual users express surprise at the audacity of the experiment.

In the wake of both the Snowden stuff and the Cuba twitter stuff, the Facebook “transmission of anger” experiment is terrifying. — Clay Johnson (@cjoh) June 28, 2014
Get off Facebook. Get your family off Facebook. If you work there, quit. They’re fucking awful. — Erin Kissane (@kissane) June 28, 2014

We’re tracking the ethical, legal, and philosophical response to this Facebook experiment here. We’ve also asked the authors of the study for comment. Author Jamie Guillory replied and referred us to a Facebook spokesman. Early Sunday morning, a Facebook spokesman sent this comment in an email:

This research was conducted for a single week in 2012 and none of the data used was associated with a specific person’s Facebook account. We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely.

And on Sunday afternoon, Adam D.I. Kramer, one of the study’s authors and a Facebook employee, commented on the experiment in a public Facebook post. “And at the end of the day, the actual impact on people in the experiment was the minimal amount to statistically detect it,” he writes. “Having written and designed this experiment myself, I can tell you that our goal was never to upset anyone … In hindsight, the research benefits of the paper may not have justified all of this anxiety.”

Kramer adds that Facebook’s internal review practices have “come a long way” since 2012, when the experiment was run.

What did the paper itself find?

The study found that by manipulating the News Feeds displayed to 689,003 Facebook users, it could affect the content that those users posted to Facebook. More negative News Feeds led to more negative status messages, as more positive News Feeds led to positive statuses.

As far as the study was concerned, this meant that it had shown “that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” It touts that this emotional contagion can be achieved without “direct interaction between people” (because the unwitting subjects were only seeing each other’s News Feeds).

The researchers add that never during the experiment could they read individual users’ posts.

Two interesting things stuck out to me in the study.

The first? The effect the study documents is very small, as little as one-tenth of a percent of an observed change. That doesn’t mean it’s unimportant, though, as the authors add:

Given the massive scale of social networks such as Facebook, even small effects can have large aggregated consequences … After all, an effect size of d = 0.001 at Facebook’s scale is not negligible: In early 2013, this would have corresponded to hundreds of thousands of emotion expressions in status updates per day.

The second was this line:

Omitting emotional content reduced the amount of words the person subsequently produced, both when positivity was reduced (z = −4.78, P < 0.001) and when negativity was reduced (z = −7.219, P < 0.001).

In other words, when researchers reduced the appearance of either positive or negative sentiments in people’s News Feeds—when the feeds just got generally less emotional—those people stopped writing so many words on Facebook.

Make people’s feeds blander and they stop typing things into Facebook.

Was the study well designed? Perhaps not, says John Grohol, the founder of psychology website Psych Central. Grohol believes the study’s methods are hampered by the misuse of tools: Software better matched to analyze novels and essays, he says, is being applied toward the much shorter texts on social networks.

Let’s look at two hypothetical examples of why this is important. Here are two sample tweets (or status updates) that are not uncommon: “I am not happy.” “I am not having a great day.” An independent rater or judge would rate these two tweets as negative—they’re clearly expressing a negative emotion. That would be +2 on the negative scale, and 0 on the positive scale. But the LIWC 2007 tool doesn’t see it that way. Instead, it would rate these two tweets as scoring +2 for positive (because of the words “great” and “happy”) and +2 for negative (because of the word “not” in both texts).

“What the Facebook researchers clearly show,” writes Grohol, “is that they put too much faith in the tools they’re using without understanding—and discussing—the tools’ significant limitations.”
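Grohol's point is easy to reproduce. Below is a minimal Python sketch of purely word-count-based scoring; the two-word mini lexicon is hypothetical (chosen only to mirror his description) and is not the actual LIWC 2007 dictionary.

```python
# Word-count sentiment scoring with no model of negation or context.
# The mini lexicon below is hypothetical, not the LIWC 2007 dictionary;
# per Grohol's description, "not" is counted toward the negative score.
POSITIVE_WORDS = {"happy", "great"}
NEGATIVE_WORDS = {"not"}

def score(text):
    """Return (positive hits, negative hits) for one status update."""
    words = text.lower().rstrip(".").split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return pos, neg

updates = ["I am not happy.", "I am not having a great day."]
print([score(u) for u in updates])  # [(1, 1), (1, 1)]
# Across both updates: +2 positive, +2 negative -- even though a human
# rater would score both of them as clearly negative.
```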

Did an institutional review board (IRB)—an independent ethics committee that vets research that involves humans—approve the experiment?

According to a Cornell University press statement on Monday, the experiment was conducted before an IRB was consulted.* Cornell professor Jeffrey Hancock—an author of the study—began working on the results after Facebook had conducted the experiment. Hancock only had access to results, says the release, so “Cornell University’s Institutional Review Board concluded that he was not directly engaged in human research and that no review by the Cornell Human Research Protection Program was required.”

In other words, the experiment had already been run, so its human subjects were beyond protecting. Assuming the researchers did not see users’ confidential data, the results of the experiment could be examined without further endangering any subjects.

Both Cornell and Facebook have been reluctant to provide details about the process beyond their respective prepared statements. One of the study's authors told The Atlantic on Monday that he’s been advised by the university not to speak to reporters.

By the time the study reached Susan Fiske, the Princeton University psychology professor who edited the study for publication, Cornell’s IRB members had already determined it outside of their purview.

Fiske had earlier conveyed to The Atlantic that the experiment was IRB-approved.

“I was concerned,” Fiske told The Atlantic on Saturday, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”

On Sunday, other reports raised questions about how an IRB was consulted. In a Facebook post on Sunday, study author Adam Kramer referenced only “internal review practices.” And a Forbes report that day, citing an unnamed source, claimed that Facebook only used an internal review.

When The Atlantic asked Fiske to clarify Sunday, she said the researchers’ “revision letter said they had Cornell IRB approval as a ‘pre-existing dataset’ presumably from FB, who seems to have reviewed it as well in some unspecified way … Under IRB regulations, pre-existing dataset would have been approved previously and someone is just analyzing data already collected, often by someone else.”

The mention of a “pre-existing dataset” here matters because, as Fiske explained in a follow-up email, “presumably the data already existed when they applied to Cornell IRB.” (She also noted: “I am not second-guessing the decision.”) Cornell’s Monday statement confirms this presumption.

On Saturday, Fiske said that she didn’t want “the originality of the research” to be lost, but called the experiment “an open ethical question.”

“It’s ethically okay from the regulations perspective, but ethics are kind of social decisions. There’s not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn’t have been done … I’m still thinking about it and I’m a little creeped out, too.”

For more, check Atlantic editor Adrienne LaFrance’s full interview with Prof. Fiske.

From what we know now, were the experiment’s subjects able to provide informed consent?

In its ethical principles and code of conduct, the American Psychological Association (APA) defines informed consent like this:

When psychologists conduct research or provide assessment, therapy, counseling, or consulting services in person or via electronic transmission or other forms of communication, they obtain the informed consent of the individual or individuals using language that is reasonably understandable to that person or persons except when conducting such activities without consent is mandated by law or governmental regulation or as otherwise provided in this Ethics Code.

As mentioned above, the research seems to have been carried out under Facebook’s extensive terms of service. The company’s current data-use policy, which governs exactly how it may use users’ data, runs to more than 9,000 words and uses the word research twice. But as Forbes writer Kashmir Hill reported Monday night, the data-use policy in effect when the experiment was conducted never mentioned “research” at all—the word wasn’t inserted until May 2012.

Never mind whether the current data-use policy constitutes “language that is reasonably understandable”: Under the January 2012 terms of service, did Facebook secure even shaky consent?

The APA has further guidelines for so-called deceptive research like this, where the real purpose of the research can’t be made available to participants during research. The last of these guidelines is:

Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data.

At the end of the experiment, did Facebook tell the user-subjects that their News Feeds had been altered for the sake of research? If so, the study never mentions it.

James Grimmelmann, a law professor at the University of Maryland, believes the study did not secure informed consent. And he adds that Facebook fails even its own standards, which are lower than those of the academy:

A stronger reason is that even when Facebook manipulates our News Feeds to sell us things, it is supposed—legally and ethically—to meet certain minimal standards. Anything on Facebook that is actually an ad is labelled as such (even if not always clearly). This study failed even that test, and for a particularly unappealing research goal: We wanted to see if we could make you feel bad without you noticing. We succeeded.

Did the U.S. government sponsor the research?

Cornell has now updated its June 10 story to say that the research received no external funding. Originally, Cornell had identified the Army Research Office, an agency within the U.S. Army that funds basic research in the military’s interest, as one of the funders of the experiment.

Do these kinds of News Feed tweaks happen at other times?

At any one time, Facebook said last year, there were on average 1,500 pieces of content that could show up in your News Feed. The company uses an algorithm to determine what to display and what to hide.

It talks about this algorithm very rarely, but we know it’s very powerful. Last year, the company changed News Feed to surface more news stories. Websites like BuzzFeed and Upworthy proceeded to see record-busting numbers of visitors.

So we know it happens. Consider Fiske’s explanation of the research ethics here—the study was approved “on the grounds that Facebook apparently manipulates people’s News Feeds all the time.” And consider also that from this study alone Facebook knows at least one knob to tweak to get users to post more words on Facebook.
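To make the idea of algorithmic feed curation concrete, here is a purely illustrative sketch in the spirit of the long-retired, publicly discussed EdgeRank heuristic (affinity, content-type weight, and time decay). It is not Facebook's actual ranking system; every weight and field name below is an assumption chosen for the example.

```python
# Purely illustrative feed-ranking sketch -- NOT Facebook's actual algorithm.
# Each candidate story gets a score from an EdgeRank-style heuristic
# (author affinity * content-type weight * exponential time decay),
# and only the top-scoring stories are shown.
from dataclasses import dataclass
from math import exp

@dataclass
class Story:
    author_affinity: float  # how often the viewer interacts with the author, 0-1 (assumed)
    type_weight: float      # arbitrary weight per content type, e.g. photo > status (assumed)
    age_hours: float        # how old the story is

def score(story: Story, decay_rate: float = 0.05) -> float:
    """Higher affinity and weight raise the score; older stories decay exponentially."""
    return story.author_affinity * story.type_weight * exp(-decay_rate * story.age_hours)

def rank_feed(candidates: list[Story], top_n: int = 20) -> list[Story]:
    """From roughly 1,500 candidate stories, keep only the top_n highest-scoring ones."""
    return sorted(candidates, key=score, reverse=True)[:top_n]

# Example: three candidates, only the best two are surfaced.
feed = rank_feed([Story(0.9, 1.5, 2), Story(0.2, 1.0, 1), Story(0.7, 1.2, 30)], top_n=2)
```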

* This post originally stated that an institutional review board, or IRB, was consulted before the experiment took place regarding certain aspects of data collection.

Adrienne LaFrance contributed writing and reporting.


How to write a social media case study (with template)

Written by Jenn Chen

Published on October 10, 2019

Reading time: 8 minutes

You’ve got a good number of social media clients under your belt, and you feel fairly confident in your own service or product content marketing strategy. To attract new clients, you’ll tell them how you’ve tripled someone else’s engagement rates. But how do they know this is true? Enter the case study.

Social media case studies are often used as part of a sales funnel: the potential client sees themselves in the case study and signs up because they want the same or better results. At Sprout, we use this strategy with our own case studies highlighting our customers’ successes.

Writing and publishing case studies is time-intensive but straightforward. This guide will walk through how to create a social media case study for your business and highlight some examples.

What is a social media case study?

A case study is basically a long testimonial or review. Case studies commonly highlight what a business has achieved by using a social media service or strategy, and they illustrate how your company’s offerings help clients in a specific situation. Some case studies are written just to examine how a problem was solved or performance was improved from a general perspective. For this guide, we’ll be examining case studies that are focused on highlighting a company’s own products and services.

Case studies come in all content formats: long-form article, downloadable PDF, video and infographic. A single case study can be recycled into different formats as long as the information is still relevant.

At their core, case studies serve to inform a current or potential customer about a real-life scenario where your service or product was applied. There’s often a set date range for the campaign, along with real-life statistics. The idea is to help the reader get a clearer understanding of how to use your product and why it could help.

Broad selling points like “our service will cut down your response time” are nice, but a sentence like “After three months of using the software for responses, the company decreased their response time by 52%” works even better. It’s no longer a dream that you’ll help them decrease their response time, because you’ve already done it for another company.

So now that you understand what a case study is, let’s get started on how to create one that’s effective and will help attract new clients.

How to write a social marketing case study

Writing an effective case study is all about the prep work. You’ve got to get all of your questions and setup ready so you can minimize back and forth between you and the client.

1. Prepare your questions

Depending on how the case study will be presented and how familiar you are with the client to be featured, you may want to send some preliminary questions before the interview. It’s important to not only get permission from the company to use their logo, quotes and graphs but also to make sure they know they’ll be going into a public case study.

Your preliminary questions should cover background information about the company and ask about campaigns they are interested in discussing. Be sure to also identify which of your products and services they used. You can go into the details in the interview.

Once you receive the preliminary answers back, it’s time to prepare your questions for the interview. This is where you’ll get more information about how they used your products and how they contributed to the campaign’s success.

2. Interview

When you conduct your interview, think ahead about how you want it to be done. Whether it’s a phone call, video meeting or in-person meeting, make sure it’s recorded. You can use tools like Google Meet, Zoom or UberConference to host and record calls (with your client’s permission, of course). This ensures that your quotes are accurate and that you can play the recording back in case you miss any information. Tip: test out your recording device and process before the interview. You don’t want to go through the interview only to find out the recording didn’t save.

Ask open-ended questions to invite good quotes. You may need to use follow-up questions if the answers are too vague. Here are some examples.

  • Explain how you use (your product or service) in general and for the campaign. Please name specific features.
  • Describe how the feature helped your campaign achieve success.
  • What were the campaign outcomes?
  • What did you learn from the campaign?

Since we’re focused on creating a social media case study in this case, you can dive more deeply into social strategies and tactics too:

  • Tell me about your approach to social media. How has it changed over time, if at all? What role does it play for the organization? How do you use it? What are you hoping to achieve?
  • Are there specific social channels you prioritize? If so, why?
  • How do you make sure your social efforts are reaching the right audience?
  • What specific challenges do organizations like yours face when it comes to social?
  • How do you measure the ROI of using social? Are there certain outcomes that prove the value of social for your organization? What metrics are you using to determine how effective social is for you?

As the conversation continues, you can ask more leading questions if needed to make sure you get quotes that tie these strategic insights directly back to the services, products or strategies your company delivered to help the client achieve success. Here are just a couple of examples.

  • Are there specific features that stick out to you as particularly helpful or especially beneficial for you and your objectives?
  • How are you using (product/service) to support your social strategy? What’s a typical day like for your team using it?

Image: quote from Sprout’s Lake Metroparks case study

The above quote was inserted into the Sprout Lake Metroparks case study . It’s an example of identifying a quote from an interview that helps make the impact of the product tangible in a client’s day to day.

At the end of the interview, be sure to thank the company and request relevant assets.

Afterwards, you may want to transcribe the interview to increase the ease of reviewing the material and writing the case study. You can DIY or use a paid service like Rev to speed up this part of the process.

3. Request assets and graphics

This is another important prep step because you want to get everything you need in one request and avoid back and forth that takes up both your time and your customer’s. Be very clear on what you need and the file formats you need it in.

Some common assets include:

  • Logo in .png format
  • Logo guidelines so you know how to use them correctly
  • Links to social media posts that were used during the campaign
  • Headshots of people you interviewed
  • Social media analytics reports. Be specific about the report names and date range you need so that, if the client uses a tool like Sprout, they know exactly which reports to export.


4. Write the copy

Now that the information has been collected, it’s time to dissect it all and assemble it. At the end of this guide, we have an example outline template for you to follow. When writing a case study, write to the audience you’re trying to attract. In this case, it’ll be a potential customer that’s similar to the one you’re highlighting.

Use a mix of sentences and bullet points to attract different kinds of readers. The tone should be uplifting because you’re highlighting a success story. When identifying quotes to use, remove any fillers (“um”) and cut out unnecessary info.


5. Pay attention to formatting

Image: Sprout case study of Stoneacre Motor Group

And finally, depending on the content type, enlist the help of a graphic designer to make it look presentable. You may also want to include call-to-action buttons or links inside of your article. If you offer free trials, case studies are a great place to promote them.

Social media case study template

Writing a case study is a lot like writing a story or presenting a research paper (but less dry). Below is a general outline to follow, but you’re welcome to adapt it to fit your needs.

  • Headline: Attention-grabbing and effective. Example: “How Benefit turns cosmetics into connection using Sprout Social”
  • Summary: A few sentences long with a basic overview of the brand’s story. Give the who, what, where, why and how. Which service and/or product did they use?
  • Introduce the company: Give background on who you’re highlighting. Include pertinent information like how big their social media team is, information about who you interviewed and how they run their social media.
  • Describe the problem or campaign: What were they trying to solve? Why was this a problem for them? What were the goals of the campaign?
  • Present the solution and end results: Describe what was done to achieve success. Include relevant social media statistics (graphics are encouraged).
  • Conclusion: Wrap it up with a reflection from the company spokesperson. How did they think the campaign went? What would they change to build on this success for the future? How did using the service compare to other services used in a similar situation?

Case studies are essential marketing and sales tools for any business that offers robust services or products. They help the customer reading them picture their own company using the product in a similar fashion. Like a testimonial, words from the case study’s featured company carry more weight than sales points from your own company.

When creating your first case study, keep in mind that preparation is the key to success. You want to find a company that is more than happy to sing your praises and share details about their social media campaign.

Once you’ve started developing case studies, find out the best ways to promote them alongside all your other content with our free social media content mix tool.


Facebook and Cambridge Analytica: What You Need to Know as Fallout Widens

By Kevin Granville

March 19, 2018


Our report that a political firm hired by the Trump campaign acquired access to private data on millions of Facebook users has sparked new questions about how the social media giant protects user information.

Who collected all that data?

Cambridge Analytica, a political data firm hired by President Trump’s 2016 election campaign, gained access to private information on more than 50 million Facebook users. The firm offered tools that could identify the personalities of American voters and influence their behavior.

Cambridge has been largely funded by Robert Mercer, the wealthy Republican donor, and Stephen K. Bannon, a former adviser to the president who became an early board member and gave the firm its name. It has pitched its services to potential clients ranging from Mastercard and the New York Yankees to the Joint Chiefs of Staff.

On Monday, a British TV news report cast it in a harsher light, showing video of Cambridge Analytica executives offering to entrap politicians. A day later, as a furor grew, the company suspended its chief executive, Alexander Nix.

[Read more about how Cambridge Analytica and the Trump campaign became linked]

What kind of information was collected, and how was it acquired?

The data, a portion of which was viewed by The New York Times, included details on users’ identities, friend networks and “likes.” The idea was to map personality traits based on what people had liked on Facebook, and then use that information to target audiences with digital ads.

Researchers in 2014 asked users to take a personality survey and download an app, which scraped some private information from their profiles and those of their friends, activity that Facebook permitted at the time and has since banned.

The technique had been developed at Cambridge University’s Psychometrics Center. The center declined to work with Cambridge Analytica, but Aleksandr Kogan, a Russian-American psychology professor at the university, was willing.

Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica.

He ultimately provided over 50 million raw profiles to the firm, said Christopher Wylie, a data expert who oversaw Cambridge Analytica’s data harvesting. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested, though they were all told that it was being collected for academic purposes.

Facebook said no passwords or “sensitive pieces of information” had been taken, though information about a user’s location was available to Cambridge.

[Read more about the internal tension at the top of Facebook over the platform’s political exploitation]

So was Facebook hacked?

Facebook in recent days has insisted that what Cambridge did was not a data breach, because it routinely allows researchers to have access to user data for academic purposes — and users consent to this access when they create a Facebook account.

But Facebook prohibits this kind of data from being sold or transferred “to any ad network, data broker or other advertising or monetization-related service.” It says that was exactly what Dr. Kogan did in providing the information to a political consulting firm.

Dr. Kogan declined to provide The Times with details of what had happened, citing nondisclosure agreements with Facebook and Cambridge Analytica.

Cambridge Analytica officials, after denying that they had obtained or used Facebook data, changed their story last week. In a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Dr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago.

But the data, or at least copies, may still exist. The Times was recently able to view a set of raw data from the profiles Cambridge Analytica obtained.

What is Facebook doing in response?

The company issued a statement on Friday saying that in 2015, when it learned that Dr. Kogan’s research had been turned over to Cambridge Analytica, violating its terms of service, it removed Dr. Kogan’s app from the site. It said it had demanded and received certification that the data had been destroyed.

Facebook also said: “Several days ago, we received reports that, contrary to the certifications we were given, not all data was deleted. We are moving aggressively to determine the accuracy of these claims. If true, this is another unacceptable violation of trust and the commitments they made. We are suspending SCL/Cambridge Analytica, Wylie and Kogan from Facebook, pending further information.”

In a further step, Facebook said Monday that it had hired a digital forensics firm “to determine the accuracy of the claims that the Facebook data in question still exists.” It said that Cambridge Analytica had agreed to the review and that Dr. Kogan had given a verbal commitment, while Mr. Wylie “thus far has declined.”

[Read more about how to protect your data on Facebook]

What are others saying?

Facebook, already facing deep questions over the use of its platform by those seeking to spread Russian propaganda and fake news, is facing a renewed backlash after the news about Cambridge Analytica. Investors have not been pleased, sending shares of the company down more than 8 percent since Friday.

■ The Federal Trade Commission said Tuesday it is investigating whether Facebook violated a 2011 consent agreement to keep users’ data private.

■ In Congress, Senators Amy Klobuchar, a Democrat from Minnesota, and John Kennedy, a Republican from Louisiana, have asked to hold a hearing on Facebook’s links to Cambridge Analytica. Republican leaders of the Senate Commerce Committee, led by John Thune of South Dakota, wrote a letter on Monday to Mark Zuckerberg, Facebook’s chief executive, demanding answers to questions about how the data was collected.

■ A British Parliament committee sent a letter to Mr. Zuckerberg asking him to appear before the panel to answer questions on Facebook’s ties to Cambridge Analytica.

■ The attorney general of Massachusetts, Maura Healey, announced on Saturday that her office was opening an investigation. “Massachusetts residents deserve answers immediately from Facebook and Cambridge Analytica,” she said in a Twitter post . Facebook’s lack of disclosure on the harvesting of data could violate privacy laws in Britain and several states.



Facebook data privacy scandal: A cheat sheet


A decade of apparent indifference to data privacy at Facebook has culminated in revelations that organizations harvested user data for targeted advertising, particularly political advertising, to apparent success. While the most well-known offender is Cambridge Analytica, the political consulting and strategic communication firm behind the pro-Brexit Leave.EU campaign and Donald Trump’s 2016 presidential campaign, other companies have likely used similar tactics to collect personal data of Facebook users.

TechRepublic’s cheat sheet about the Facebook data privacy scandal covers the ongoing controversy surrounding the illicit use of profile information. This article will be updated as more information about this developing story comes to the forefront. It is also available as a download, Cheat sheet: Facebook Data Privacy Scandal (free PDF) .

SEE: Navigating data privacy (ZDNet/TechRepublic special feature) | Download the free PDF version (TechRepublic)

What is the Facebook data privacy scandal?

The Facebook data privacy scandal centers around the collection of personally identifiable information of “up to 87 million people” by the political consulting and strategic communication firm Cambridge Analytica. That company, and others, were able to gain access to personal data of Facebook users due to the confluence of a variety of factors, broadly including inadequate safeguards against companies engaging in data harvesting, little to no oversight of developers by Facebook, developer abuse of the Facebook API, and users agreeing to overly broad terms and conditions.

SEE: Information security policy (TechRepublic Premium)

In the case of Cambridge Analytica, the company was able to harvest personally identifiable information through a personality quiz app called thisisyourdigitallife, based on the OCEAN personality model. Information gathered via this app is useful in building a “psychographic” profile of users (the OCEAN acronym stands for openness, conscientiousness, extraversion, agreeableness, and neuroticism). Adding the app to a Facebook account to take the quiz gave the app’s creator access to profile information and user history for the person taking the quiz, as well as for all of that person’s Facebook friends. This data included all of the items that users and their friends had liked on Facebook.

Researchers associated with Cambridge University claimed in a paper that Facebook likes “can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender,” using a model developed by the researchers that combines dimensionality reduction with logistic/linear regression to infer this information about users.

The model, according to the researchers, is effective due to the relationship of likes to a given attribute. However, most predictive likes are not explicitly indicative of the attributes they predict. The researchers note that “less than 5% of users labeled as gay were connected with explicitly gay groups,” but that liking “Juicy Couture” and “Adam Lambert” is indicative of gay men, while liking “WWE” and “Being Confused After Waking Up From Naps” is indicative of straight men. Other such connections are peculiarly lateral, with “curly fries” being an indicator of high IQ, “sour candy” being an indicator of not smoking, and “Gene Wilder” being an indicator that the user’s parents had not separated by age 21.
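To make the general technique described above concrete, here is a minimal, hypothetical sketch of like-based attribute prediction. It assumes a sparse user-by-like matrix and a binary attribute label per user (both generated randomly for illustration), reduces dimensionality with truncated SVD, and fits a logistic regression on the reduced components. It is not the researchers’ model, data, or code.

```python
# Hypothetical sketch of like-based attribute prediction -- not the researchers' actual model.
# Assumes: rows are users, columns are Facebook pages, cell = 1 if the user liked the page.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_users, n_pages = 2_000, 5_000

# Synthetic stand-in data: a sparse 0/1 likes matrix and a fake binary attribute label.
likes = sparse_random(n_users, n_pages, density=0.01, format="csr", random_state=0)
likes.data[:] = 1.0
attribute = rng.integers(0, 2, size=n_users)

X_train, X_test, y_train, y_test = train_test_split(likes, attribute, random_state=0)

model = make_pipeline(
    TruncatedSVD(n_components=100, random_state=0),  # dimensionality reduction
    LogisticRegression(max_iter=1000),               # predict the attribute from the components
)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")  # ~0.5 here, since labels are random
```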

SEE: Can Russian hackers be stopped? Here’s why it might take 20 years (TechRepublic cover story) | download the PDF version

Additional resources

  • How a Facebook app scraped millions of people’s personal data (CBS News)
  • Facebook reportedly thinks there’s no ‘expectation of privacy’ on social media (CNET)
  • Cambridge Analytica: ‘We know what you want before you want it’ (TechRepublic)
  • Average US citizen had personal information stolen at least 4 times in 2019 (TechRepublic)
  • Facebook: We’ll pay you to track down apps that misuse your data (ZDNet)
  • Most consumers do not trust big tech with their privacy (TechRepublic)
  • Facebook asks permission to use personal data in Brazil (ZDNet)

What is the timeline of the Facebook data privacy scandal?

Facebook has more than a decade-long track record of incidents highlighting inadequate and insufficient measures to protect data privacy. While the severity of these individual cases varies, the sequence of repeated failures paints a larger picture of systemic problems.

SEE: All TechRepublic cheat sheets and smart person’s guides

In 2005, researchers at MIT created a script that downloaded publicly posted information of more than 70,000 users from four schools. (Facebook only began to allow search engines to crawl profiles in September 2007.)

In 2007, activities that users engaged in on other websites were automatically added to Facebook user profiles as part of Beacon, one of Facebook’s first attempts to monetize user profiles. As an example, Beacon indicated on the Facebook News Feed the titles of videos that users rented from Blockbuster Video, which was a violation of the Video Privacy Protection Act. A class action suit was filed, for which Facebook paid $9.5 million to a fund for privacy and security as part of a settlement agreement.

SEE: The Brexit dilemma: Will London’s start-ups stay or go? (TechRepublic cover story)

In 2011, following an FTC investigation, the company entered into a consent decree, promising to address concerns about how user data was tracked and shared. That investigation was prompted by an incident in December 2009 in which information thought private by users was being shared publicly, according to contemporaneous reporting by The New York Times .

In 2013, Facebook disclosed details of a bug that exposed the personal details of six million accounts over approximately a year. When a user downloaded their own Facebook history, they would obtain in the same action not just their own address book, but also the email addresses and phone numbers of friends that other people had stored in their address books. The data that Facebook exposed had not been given to Facebook by those users to begin with; it had been vacuumed from the contact lists of other Facebook users who happened to know the person. This phenomenon has since been described as “shadow profiles.”

The Cambridge Analytica portion of the data privacy scandal starts in February 2014. A spate of reviews on the Turkopticon website (a third-party review site for users of Amazon’s Mechanical Turk) detail a task requested by Aleksandr Kogan asking users to complete a survey in exchange for money. The survey required users to add the thisisyourdigitallife app to their Facebook account, which is in violation of Mechanical Turk’s terms of service. One review quotes the request as requiring users to “provide our app access to your Facebook so we can download some of your data–some demographic data, your likes, your friends list, whether your friends know one another, and some of your private messages.”

In December 2015, Facebook learned for the first time that the data set Kogan generated with the app was shared with Cambridge Analytica. Facebook founder and CEO Mark Zuckerberg claims “we immediately banned Kogan’s app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.”

According to Cambridge Analytica, the company took legal action in August 2016 against GSR (Kogan) for licensing “illegally acquired data” to the company, with a settlement reached that November.

On March 17, 2018, an exposé was published by The Guardian and The New York Times, initially reporting that 50 million Facebook profiles were harvested by Cambridge Analytica; the figure was later revised to “up to 87 million” profiles. The exposé relies on information provided by Christopher Wylie, a former employee of SCL Elections and Global Science Research, the creator of the thisisyourdigitallife app. Wylie claimed that the data from that app was sold to Cambridge Analytica, which used the data to develop “psychographic” profiles of users and to target users with pro-Trump advertising, a claim that Cambridge Analytica denied.

On March 16, 2018, Facebook threatened to sue The Guardian over publication of the story, according to a tweet by Guardian reporter Carole Cadwalladr . Campbell Brown, a former CNN journalist who now works as head of news partnerships at Facebook, said it was “not our wisest move,” adding “If it were me I would have probably not threatened to sue The Guardian.” Similarly, Cambridge Analytica threatened to sue The Guardian for defamation .

On March 20, 2018, the FTC opened an investigation to determine if Facebook had violated the terms of the settlement from the 2011 investigation.

In April 2018, reports indicated that Facebook granted Zuckerberg and other high-ranking executives powers over personal information on the platform that are not available to normal users. Messages from Zuckerberg sent to other users were remotely deleted from users’ inboxes, which the company claimed was part of a corporate security measure following the 2014 Sony Pictures hack. Facebook subsequently announced plans to make the “unsend” capability available “to all users in several months,” and said that Zuckerberg would be unable to unsend messages until that feature rolled out. Facebook added the feature 10 months later, on February 6, 2019. The public feature permits users to delete messages up to 10 minutes after they were sent; in the controversy prompting the feature, Zuckerberg had deleted messages months after they were sent.

On April 4, 2018, The Washington Post reported that Facebook announced “malicious actors” abused the search function to gather public profile information of “most of its 2 billion users worldwide.”

In a CBS News/YouGov poll published on April 10, 2018, 61% of Americans said Congress should do more to regulate social media and tech companies. This sentiment was echoed in a CBS News interview with Box CEO Aaron Levie and YML CEO Ashish Toshniwal who called on Congress to regulate Facebook. According to Levie, “There are so many examples where we don’t have modern ways of either regulating, controlling, or putting the right protections in place in the internet age. And this is a fundamental issue that, that we’re gonna have to grapple with as an industry for the next decade.”

On April 18, 2018, Facebook updated its privacy policy .

On May 2, 2018, SCL Group, which owns Cambridge Analytica, was dissolved. In a press release , the company indicated that “the siege of media coverage has driven away virtually all of the Company’s customers and suppliers.”

On May 15, 2018, The New York Times reported that Cambridge Analytica is being investigated by the FBI and the Justice Department. A source indicated to CBS News that prosecutors are focusing on potential financial crimes.

On May 16, 2018, Christopher Wylie testified before the Senate Judiciary Committee . Among other things, Wylie noted that Cambridge Analytica, under the direction of Steve Bannon, sought to “exploit certain vulnerabilities in certain segments to send them information that will remove them from the public forum, and feed them conspiracies and they’ll never see mainstream media.” Wylie also noted that the company targeted people with “characteristics that would lead them to vote for the Democratic party, particularly African American voters.”

On June 3, 2018, a report in The New York Times indicated that Facebook had maintained data-sharing partnerships with mobile device manufacturers, specifically naming Apple, Amazon, BlackBerry, Microsoft, and Samsung. Under the terms of this personal information sharing, device manufacturers were able to gather information about users in order to deliver “the Facebook experience,” the Times quotes a Facebook official as saying. Additionally, the report indicates that this access allowed device manufacturers to obtain data about a user’s Facebook friends, even if those friends had configured their privacy settings to deny information sharing with third parties.

The same day, Facebook issued a rebuttal to the Times report indicating that the partnerships were conceived because “the demand for Facebook outpaced our ability to build versions of the product that worked on every phone or operating system,” at a time when the smartphone market included BlackBerry’s BB10 and Windows Phone operating systems, among others. Facebook claimed that “contrary to claims by the New York Times, friends’ information, like photos, was only accessible on devices when people made a decision to share their information with those friends. We are not aware of any abuse by these companies.” The distinction being made is partially semantic, as Facebook does not consider these partnerships a third party in this case. Facebook noted that changes to the platform made in April began “winding down” access to these APIs, and that 22 of the partnerships had already been ended.

On June 5, 2018, The Washington Post and The New York Times reported that the Chinese device manufacturers Huawei, Lenovo, Oppo, and TCL were granted access to user data under this program. Huawei and ZTE are facing scrutiny from the US government over unsubstantiated accusations that products from these companies pose a national security risk.

On July 2, 2018, The Washington Post reported that the US Securities and Exchange Commission, Federal Trade Commission, and Federal Bureau of Investigation had joined the Department of Justice inquiry into the Facebook/Cambridge Analytica data scandal. In a statement to CNET, Facebook indicated that “We’ve provided public testimony, answered questions, and pledged to continue our assistance as their work continues.” On July 11, the Wall Street Journal reported that the SEC is separately investigating whether Facebook adequately warned investors in a timely manner about the possible misuse and improper collection of user data. The same day, the UK assessed a £500,000 fine against Facebook, the maximum permitted by law, over its role in the data scandal. The UK’s Information Commissioner’s Office is also preparing to launch a criminal probe into SCL Elections over its involvement in the scandal.

On July 3, 2018, Facebook acknowledged a “bug” that unblocked people whom users had blocked between May 29 and June 5.

On July 12, 2018, a CNBC report indicated that a privacy loophole was discovered and closed. A Chrome plug-in intended for marketing research called Grouply.io allowed users to access the list of members for private Facebook groups. Congress sent a letter to Zuckerberg on February 19, 2019 demanding answers about the data leak, stating in part that “labeling these groups as closed or anonymous potentially misled Facebook users into joining these groups and revealing more personal information than they otherwise would have,” and “Facebook may have failed to properly notify group members that their personal health information may have been accessed by health insurance companies and online bullies, among others.”

Fallout from a confluence of factors in the Facebook data privacy scandal came to bear in the last week of July 2018. On July 25, Facebook announced that daily active user counts had fallen in Europe, and that growth had stagnated in the US and Canada. The following day, Facebook suffered the worst single-day market value decrease for a public company in the US, dropping $120 billion, or 19%. On July 28, Reuters reported that shareholders are suing Facebook, Zuckerberg, and CFO David Wehner for “making misleading statements about or failing to disclose slowing revenue growth, falling operating margins, and declines in active users.”

On August 22, 2018, Facebook removed Facebook-owned security app Onavo from the App Store , for violating privacy rules. Data collected through the Onavo app is shared with Facebook.

In testimony before the Senate on September 5, 2018, COO Sheryl Sandberg conceded that the company “[was] too slow to spot this and too slow to act” on privacy protections. Sandberg and Twitter CEO Jack Dorsey faced questions focusing on user privacy, election interference, and political censorship. Senator Mark Warner of Virginia said that “The era of the wild west in social media is coming to an end,” which seems to indicate coming legislation.

On September 6, 2018, a spokesperson indicated that Joseph Chancellor was no longer employed by Facebook . Chancellor was a co-director of Global Science Research, the firm which improperly provided user data to Cambridge Analytica. An internal investigation was launched in March in part to determine his involvement. No statement was released indicating the result of that investigation.

On September 7, 2018, Zuckerberg stated in a post that fixing issues such as “defending against election interference by nation states, protecting our community from abuse and harm, or making sure people have control of their information and are comfortable with how it’s used,” is a process which “will extend through 2019.”

On September 26, 2018, WhatsApp co-founder Brian Acton stated in an interview with Forbes that “I sold my users’ privacy” as a result of the messaging app being sold to Facebook in 2014 for $22 billion.

On September 28, 2018, Facebook disclosed details of a security breach which affected 50 million users . The vulnerability originated from the “view as” feature which can be used to let users see what their profiles look like to other people. Attackers devised a way to export “access tokens,” which could be used to gain control of other users’ accounts .

A CNET report published on October 5, 2018, details the existence of an “ Internet Bill of Rights ” drafted by Rep. Ro Khanna (D-CA). The bill is likely to be introduced in the event the Democrats regain control of the House of Representatives in the 2018 elections. In a statement, Khanna noted that “As our lives and the economy are more tied to the internet, it is essential to provide Americans with basic protections online.”

On October 11, 2018, Facebook deleted over 800 pages and accounts in advance of the 2018 elections for violating rules against spam and “inauthentic behavior.” The same day, it disabled accounts for a Russian firm called “Social Data Hub,” which claimed to sell scraped user data. A Reuters report indicates that Facebook will ban false information about voting in the midterm elections.

On October 16, 2018, rules requiring public disclosure of who pays for political advertising on Facebook, as well as identity verification of users paying for political advertising, were extended to the UK . The rules were first rolled out in the US in May.

On October 25, 2018, Facebook was fined £500,000 by the UK’s Information Commissioner’s Office for their role in the Cambridge Analytica scandal. The fine is the maximum amount permitted by the Data Protection Act 1998. The ICO indicated that the fine was final. A Facebook spokesperson told ZDNet that the company “respectfully disagreed,” and has filed for appeal .

The same day, Vice published a report indicating that Facebook’s advertiser disclosure policy was trivial to abuse. Reporters from Vice submitted advertisements for approval attributed to Mike Pence, DNC Chairman Tom Perez, and Islamic State, which were approved by Facebook. Further, the contents of the advertisements were copied from Russian advertisements. A spokesperson for Facebook confirmed to Vice that the copied content does not violate rules, though the false attribution does. According to Vice, the only denied submission was attributed to Hillary Clinton.

On October 30, 2018, Vice published a second report in which it claimed that it successfully applied to purchase advertisements attributed to all 100 sitting US Senators, indicating that Facebook had yet to fix the problem reported in the previous week. According to Vice, the only denied submission in this test was attributed to Mark Zuckerberg.

On November 14, 2018, the New York Times published an exposé on the Facebook data privacy scandal, citing interviews of more than 50 people, including current and former Facebook executives and employees. In the exposé, the Times reports:

  • In the Spring of 2016, a security expert employed by Facebook informed Chief Security Officer Alex Stamos of Russian hackers “probing Facebook accounts for people connected to the presidential campaigns,” which Stamos, in turn, informed general counsel Colin Stretch.
  • A group called “Project P” was assembled by Zuckerberg and Sandberg to study false news on Facebook. By January 2017, this group “pressed to issue a public paper” about their findings, but was stopped by board members and Facebook vice president of global public policy Joel Kaplan, who had formerly worked in former US President George W. Bush’s administration.
  • In Spring and Summer of 2017, Facebook was “publicly claiming there had been no Russian effort of any significance on Facebook,” despite an ongoing investigation into the extent of Russian involvement in the election.
  • Sandberg “and deputies” insisted that the post drafted by Stamos to publicly acknowledge Russian involvement for the first time be made “less specific” before publication.
  • In October 2017, Facebook expanded their engagement with Republican-linked firm Definers Public Affairs to discredit “activist protesters.” That firm worked to link people critical of Facebook to liberal philanthropist George Soros , and “[lobbied] a Jewish civil rights group to cast some criticism of the company as anti-Semitic.”
  • Following comments critical of Facebook by Apple CEO Tim Cook , a spate of articles critical of Apple and Google began appearing on NTK Network, an organization which shares an office and staff with Definers. Other articles appeared on the website downplaying the Russians’ use of Facebook.

On November 15, 2018, Facebook announced it had terminated its relationship with Definers Public Affairs, though it disputed that either Zuckerberg or Sandberg was aware of the “specific work being done.” Further, a Facebook spokesperson indicated “It is wrong to suggest that we have ever asked Definers to pay for or write articles on Facebook’s behalf, or communicate anything untrue.”

On November 22, 2018, Sandberg acknowledged that work produced by Definers “was incorporated into materials presented to me and I received a small number of emails where Definers was referenced.”

On November 25, 2018, the founder of Six4Three, on a business trip to London, was compelled by Parliament to hand over documents relating to Facebook. Six4Three obtained these documents during the discovery process relating to an app developed by the startup that used image recognition to identify photos of women in bikinis shared on Facebook users’ friends’ pages. Reports indicate that Parliament sent an official to the founder’s hotel with a warning that noncompliance would result in possible fines or imprisonment. Despite the warning, the founder remained noncompliant, after which he was escorted to Parliament, where he turned over the documents.

A report in the New York Times published on November 29, 2018, indicates that Sheryl Sandberg personally asked Facebook communications staff in January to “research George Soros’s financial interests in the wake of his high-profile attacks on tech companies.”

On December 5, 2018, documents obtained in the probe of Six4Three were released by Parliament . Damian Collins, the MP who issued the order compelling the handover of the documents in November, highlighted six key points from the documents:

  • Facebook entered into whitelisting agreements with Lyft, Airbnb, Bumble, and Netflix, among others, allowing those groups full access to friends data after Graph API v1 was discontinued. Collins indicates “It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.”
  • According to Collins, “increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers’ relationship with Facebook is a recurring feature of the documents.”
  • Data reciprocity between Facebook and app developers was a central focus for the release of Platform v3, with Zuckerberg discussing charging developers for API access to friend lists.
  • Internal discussions of changes to the Facebook Android app acknowledge that requesting permissions to collect calls and texts sent by the user would be controversial, with one project manager stating it was “a pretty high-risk thing to do from a PR perspective.”
  • Facebook used data collected through Onavo, a VPN service the company acquired in 2013, to survey the use of mobile apps on smartphones. According to Collins, this occurred “apparently without [users’] knowledge,” and was used by Facebook to determine “which companies to acquire, and which to treat as a threat.”
  • Collins contends that “the files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.” Documents disclosed specifically indicate Facebook revoked API access to video sharing service Vine.

In a statement , Facebook claimed, “Six4Three… cherrypicked these documents from years ago.” Zuckerberg responded separately to the public disclosure on Facebook, acknowledging, “Like any organization, we had a lot of internal discussion and people raised different ideas.” He called the Facebook scrutiny “healthy given the vast number of people who use our services,” but said it shouldn’t “misrepresent our actions or motives.”

On December 14, 2018, a vulnerability was disclosed in the Facebook Photo API that existed between September 13 and 25, 2018, exposing private photos of 6.8 million users. The Photo API bug affected people who used Facebook to log in to third-party services.

On December 18, 2018, The New York Times reported on special data sharing agreements that “[exempted] business partners from its usual privacy rules,” naming Microsoft’s Bing search engine, Netflix, Spotify, Amazon, and Yahoo as partners in the report. Partners were capable of accessing data including friend lists and private messages, “despite public statements it had stopped that type of sharing years earlier.” Facebook claimed the data sharing was about “helping people,” and that it was not done without user consent.

On January 17, 2019, Facebook disclosed that it removed hundreds of pages and accounts controlled by Russian propaganda organization Sputnik, including accounts posing as politicians from primarily Eastern European countries.

On January 29, 2019, a TechCrunch report uncovered the “Facebook Research” program, which paid users aged 13 to 35 up to $20 per month to install a VPN application similar to Onavo that allowed Facebook to gather practically all information about how their phones were used. On iOS, this was distributed using Apple’s Developer Enterprise Program, for which Apple briefly revoked Facebook’s certificate as a result of the controversy.

Facebook initially indicated that “less than 5% of the people who chose to participate in this market research program were teens,” and on March 1, 2019 amended the statement to “about 18 percent.”

On February 7, 2019, the German antitrust office ruled that Facebook must obtain consent before collecting data on non-Facebook members, following a three-year investigation.

On February 20, 2019, Facebook added new location controls to its Android app that allow users to limit background data collection when the app is not in use.

The same day, ZDNet reported that Microsoft’s Edge browser contained a secret whitelist allowing Facebook to run Adobe Flash, bypassing the click-to-play policy that other websites are subject to for Flash objects over 398×298 pixels. The whitelist was removed in the February 2019 Patch Tuesday update.

On March 6, 2019, Zuckerberg announced a plan to rebuild services around encryption and privacy “over the next few years.” As part of these changes, Facebook will make messages between Facebook, Instagram, and WhatsApp interoperable. Former Microsoft executive Steven Sinofsky (who was fired after the poor reception of Windows 8) called the move “fantastic,” comparing it to Microsoft’s Trustworthy Computing initiative in 2002.

CNET and CBS News Senior Producer Dan Patterson noted on CBSN that Facebook can benefit from this consolidation by making the messaging platforms cheaper to operate, as well as profiting from users sending money through the messaging platform, in a business model similar to Venmo.

On March 21, 2019, Facebook disclosed a lapse in security that resulted in hundreds of millions of passwords being stored in plain text, affecting users of Facebook, Facebook Lite, and Instagram. Facebook claimed that “these passwords were never visible to anyone outside of Facebook and we have found no evidence to date that anyone internally abused or improperly accessed them.”

Though Facebook’s post does not provide specifics, a report by veteran security reporter Brian Krebs claimed “between 200 million and 600 million” users were affected, and that “more than 20,000 Facebook employees” would have had access.

On March 22, 2019, a court filing by the attorney general of Washington DC alleged that Facebook knew about the Cambridge Analytica scandal months prior to the first public reports in December 2015. Facebook claimed that employees knew of rumors relating to Cambridge Analytica, but that those claims related to a “different incident” than the main scandal, and insisted that the company did not mislead anyone about the timeline of the scandal.

Facebook is seeking to have the case filed in Washington DC dismissed, as well as to seal a document filed in that case.

On March 31, 2019, The Washington Post published an op-ed by Zuckerberg calling for governments and regulators to take a “more active role” in regulating the internet. Shortly after, Facebook introduced a feature that explains why content is shown to users on their news feeds .

On April 3, 2019, over 540 million Facebook-related records were found on two improperly protected AWS servers . The data was collected by Cultura Colectiva, a Mexico-based online media platform, using Facebook APIs. Amazon deactivated the associated account at Facebook’s request.

On April 15, 2019, it was discovered that Oculus, a company owned by Facebook, shipped VR headsets with internal etchings including text such as “Big Brother is Watching.”

On April 18, 2019, Facebook disclosed the “unintentional” harvesting of email contacts belonging to approximately 1.5 million users over the course of three years. The affected users had been asked to provide their email account credentials to verify their identity when signing up for Facebook.

On April 30, 2019, at Facebook’s F8 developer conference , the company unveiled plans to overhaul Messenger and re-orient Facebook to prioritize Groups instead of the timeline view, with Zuckerberg declaring “The future is private.”

On May 9, 2019, Facebook co-founder Chris Hughes called for Facebook to be broken up by government regulators, in an editorial in The New York Times. Hughes, who left the company in 2007, cited concerns that Zuckerberg has surrounded himself with people who do not challenge him . “We are a nation with a tradition of reining in monopolies, no matter how well-intentioned the leaders of these companies may be. Mark’s power is unprecedented and un-American,” Hughes said.

Proponents of a Facebook breakup typically point to unwinding the social network’s purchase of Instagram and WhatsApp.

Zuckerberg dismissed Hughes’ appeal for a breakup in comments to France 2, stating in part that “If what you care about is democracy and elections, then you want a company like us to invest billions of dollars a year, like we are, in building up really advanced tools to fight election interference.”

On May 24, 2019, a report from Motherboard claimed “multiple” staff members of Snapchat used internal tools to spy on users .

On July 8, 2019, Apple co-founder Steve Wozniak warned users to get off of Facebook .

On July 18, 2019, lawmakers in a House Committee on Financial Services hearing expressed mistrust of Facebook’s Libra cryptocurrency plan due to its “pattern of failing to keep consumer data private.” Lawmakers had previously issued a letter to Facebook requesting the company pause development of the project.

On July 24, 2019, the FTC announced a $5 billion settlement with Facebook over user privacy violations. Facebook agreed to conduct an overhaul of its consumer privacy practices as part of the settlement. Access to friend data by partners such as Sony was “immediately” restricted as part of this settlement, according to CNET. Separately, the FTC settled with Aleksandr Kogan and former Cambridge Analytica CEO Alexander Nix, “restricting how they conduct any business in the future, and requiring them to delete or destroy any personal information they collected.” The FTC announced a lawsuit against Cambridge Analytica the same day.

Also on July 24, 2019, Netflix released “The Great Hack,” a documentary about the Cambridge Analytica scandal .

In early July 2020, Facebook admitted to sharing user data with an estimated 5,000 third-party developers after access to that data was supposed to have expired.

Zuckerberg testified before Congress again on July 29, 2020, as part of an antitrust hearing that included Amazon’s Jeff Bezos, Apple’s Tim Cook, and Google’s Sundar Pichai . The hearing didn’t touch on Facebook’s data privacy scandal, and was instead focused on Facebook’s purchase of Instagram and WhatsApp , as well as its treatment of other competing services.

  • Facebook knew of illicit user profile harvesting for 2 years, never acted (CBS News)
  • Facebook’s FTC consent decree deal: What you need to know (CNET)
  • Australia’s Facebook investigation expected to take at least 8 months (ZDNet)
  • Election tech: The truth about Cambridge Analytica’s political big data (TechRepublic)
  • Google sued by ACCC for allegedly linking data for ads without consent (ZDNet)
  • Midterm elections, social media and hacking: What you need to know (CNET)
  • Critical flaw revealed in Facebook Fizz TLS project (ZDNet)
  • CCPA: What California’s new privacy law means for Facebook, Twitter users (CNET)

What are the key companies involved in the Facebook data privacy scandal?

In addition to Facebook, these are the companies connected to this data privacy story.

SCL Group (formerly Strategic Communication Laboratories) is at the center of the privacy scandal, though it has operated primarily through subsidiaries. Nominally, SCL was a behavioral research/strategic communication company based in the UK. The company was dissolved on May 1, 2018.

Cambridge Analytica and SCL USA are offshoots of SCL Group, primarily operating in the US. Registration documentation indicates the pair formally came into existence in 2013. As with SCL Group, the pair were dissolved on May 1, 2018.

Global Science Research was a market research firm based in the UK from 2014 to 2017. It was the originator of the thisisyourdigitallife app. The personal data derived from the app (if not the app itself) was sold to Cambridge Analytica for use in campaign messaging.

Emerdata is the functional successor to SCL and Cambridge Analytica. It was founded in August 2017, with registration documents listing several people associated with SCL and Cambridge Analytica, as well as the same address as that of SCL Group’s London headquarters.

AggregateIQ is a Canadian consulting and technology company founded in 2013. The company produced Ripon, the software platform for Cambridge Analytica’s political campaign work, which leaked publicly after being discovered in an unsecured, publicly accessible GitLab repository.

Cubeyou is a US-based data analytics firm that also operated surveys on Facebook, and worked with Cambridge University from 2013 to 2015. It was suspended from Facebook in April 2018 following a CNBC report .

Six4Three was a US-based startup that created an app that used image recognition to identify photos of women in bikinis shared on Facebook users’ friends’ pages. The company sued Facebook in April 2015, after the app became inoperable when access to this data was revoked with the discontinuation of the original version of Facebook’s Graph API.

Onavo is an analytics company that develops mobile apps. They created Onavo Extend and Onavo Protect, which are VPN services for data protection and security, respectively. Facebook purchased the company in October 2013 . Data from Onavo is used by Facebook to track usage of non-Facebook apps on smartphones .

The Internet Research Agency is a St. Petersburg-based organization with ties to Russian intelligence services. The organization engages in politically-charged manipulation across English-language social media, including Facebook.

  • If your organization advertises on Facebook, beware of these new limitations (TechRepublic)
  • Data breach exposes Cambridge Analytica’s data mining tools (ZDNet)
  • Was your business’s Twitter feed sold to Cambridge Analytica? (TechRepublic)
  • US special counsel indicts 13 members of Russia’s election meddling troll farm (ZDNet)

Who are the key people involved in the Facebook data privacy scandal?

Nigel Oakes is the founder of SCL Group, the parent company of Cambridge Analytica. A report from Buzzfeed News unearthed a quote from 1992 in which Oakes stated, “We use the same techniques as Aristotle and Hitler. … We appeal to people on an emotional level to get them to agree on a functional level.”

Alexander Nix was the CEO of Cambridge Analytica and a director of SCL Group. He was suspended following reports detailing a video in which Nix claimed the company “offered bribes to smear opponents as corrupt,” and that it “campaigned secretly in elections… through front companies or using subcontractors.”

Robert Mercer is a conservative activist, computer scientist, and a co-founder of Cambridge Analytica. A New York Times report indicates that Mercer invested $15 million in the company. His daughters Jennifer Mercer and Rebekah Anne Mercer serve as directors of Emerdata.

Christopher Wylie is the former director of research at Cambridge Analytica. He provided information to The Guardian for its exposé of the Facebook data privacy scandal. He has since testified before committees in the US and UK about Cambridge Analytica’s involvement in this scandal.

Steve Bannon is a co-founder of Cambridge Analytica, as well as a founding member and former executive chairman of Breitbart News, an alt-right news outlet. Breitbart News has reportedly received funding from the Mercer family as far back as 2010. Bannon left Breitbart in January 2018. According to Christopher Wylie, Bannon is responsible for testing phrases such as “ drain the swamp ” at Cambridge Analytica, which were used extensively on Breitbart.

Aleksandr Kogan is a Senior Research Associate at Cambridge University and co-founder of Global Science Research, which created the data harvesting thisisyourdigitallife app. He worked as a researcher and consultant for Facebook in 2013 and 2015. Kogan also received Russian government grants and is an associate professor at St. Petersburg State University, though he claims this is an honorary role.

Joseph Chancellor was a co-director of Global Science Research, which created the data harvesting thisisyourdigitallife app. Around November 2015, he was hired by Facebook as a “quantitative social psychologist.” A spokesperson indicated on September 6, 2018, that he was no longer employed by Facebook.

Michal Kosinski , David Stillwell , and Thore Graepel are the researchers who proposed and developed the model to “psychometrically” analyze users based on their Facebook likes. At the time this model was published, Kosinski and Stillwell were affiliated with Cambridge University, while Graepel was affiliated with the Cambridge-based Microsoft Research. (None have an association with Cambridge Analytica, according to Cambridge University .)

Mark Zuckerberg is the founder and CEO of Facebook. He founded the website in 2004 from his dorm room at Harvard.

Sheryl Sandberg is the COO of Facebook. She left Google to join the company in March 2008. She became the eighth member of the company’s board of directors in 2012, the first woman to serve on Facebook’s board.

Damian Collins is a Conservative Party politician based in the United Kingdom. He serves as the Chair of the House of Commons Digital, Culture, Media and Sport Committee. Collins ordered the seizure of documents from the American founder of Six4Three while he was traveling in London, and later released those documents publicly.

Chris Hughes is one of Facebook’s co-founders; he originally handled beta testing and user feedback for the website before leaving in 2007. Hughes was the first co-founder to publicly call for Facebook to be broken up by regulators.

  • Facebook investigates employee’s ties to Cambridge Analytica (CBS News)
  • Aleksandr Kogan: The link between Cambridge Analytica and Facebook (CBS News)
  • Video: Cambridge Analytica shuts down following data scandal (CBS News)

How have Facebook and Mark Zuckerberg responded to the data privacy scandal?

Each time Facebook finds itself embroiled in a privacy scandal, the general playbook seems to be the same: Mark Zuckerberg delivers an apology, with oft-recycled lines, such as “this was a big mistake,” or “I know we can do better.” Despite repeated controversies regarding Facebook’s handling of personal data, it has continued to gain new users. This is by design: founding president Sean Parker indicated at an Axios conference in November 2017 that the first step of building Facebook features was “How do we consume as much of your time and conscious attention as possible?” Parker also likened the design of Facebook to “exploiting a vulnerability in human psychology.”

On March 16, 2018, Facebook announced that SCL and Cambridge Analytica had been banned from the platform. The announcement indicated, correctly, that “Kogan gained access to this information in a legitimate way and through the proper channels that governed all developers on Facebook at that time,” but that passing the information to a third party was against the platform’s policies.

The following day, the announcement was amended to state:

The claim that this is a data breach is completely false. Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.

On March 21, 2018, Mark Zuckerberg posted his first public statement about the issue, stating in part that:

“We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you. I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again.”

On March 26, 2018, Facebook placed full-page ads stating : “This was a breach of trust, and I’m sorry we didn’t do more at the time. We’re now taking steps to ensure this doesn’t happen again,” in The New York Times, The Washington Post, and The Wall Street Journal, as well as The Observer, The Sunday Times, Mail on Sunday, Sunday Mirror, Sunday Express, and Sunday Telegraph in the UK.

In a blog post on April 4, 2018, Facebook announced a series of changes to data handling practices and API access capabilities. Foremost among these is a restriction on the Events API, which can no longer access an event’s guest list or wall posts. Additionally, Facebook removed the ability to search for users by phone number or email address and made changes to the account recovery process to fight scraping.
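To make the scope of such API changes concrete, the following is a minimal, hypothetical sketch of a Graph API read request in Python. The API version, event ID, access token, and requested fields are placeholder assumptions for illustration, not Facebook’s documented post-change contract; the point is simply that after restrictions like these, fields exposing an event’s guest list or wall posts stop being returned to third-party apps, regardless of how the request is formed.

```python
import requests

# Illustrative only: placeholder token, event ID, and API version.
ACCESS_TOKEN = "PLACEHOLDER_TOKEN"
EVENT_ID = "1234567890"

# A third-party app reading an event. After the April 2018 changes described
# above, requests for the guest list or wall posts would come back empty or
# denied; aggregate fields such as an attendance count may still be available.
resp = requests.get(
    f"https://graph.facebook.com/v3.0/{EVENT_ID}",
    params={
        "fields": "name,start_time,attending_count",  # illustrative field names
        "access_token": ACCESS_TOKEN,
    },
    timeout=10,
)
print(resp.status_code, resp.json())
```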

On April 10, 2018, and April 11, 2018, Mark Zuckerberg testified before Congress. Details about his testimony are in the next section of this article.

On April 10, 2018, Facebook announced the launch of its data abuse bug bounty program. While Facebook has an existing security bug bounty program, this one is aimed specifically at reports of data abuse, such as malicious apps or users harvesting personal information. There is no limit to how much Facebook could potentially pay in a bounty, though to date the highest amount the company has paid is $40,000 for a security bug.

On May 14, 2018, “around 200” apps were banned from Facebook as part of an investigation into whether companies had abused its APIs to harvest personal information. The company declined to provide a list of the offending apps.

On May 22, 2018, Mark Zuckerberg testified, briefly, before the European Parliament about the data privacy scandal and Cambridge Analytica. The format of the testimony drew derision, as all of the questions were posed before Zuckerberg gave any answers. Guy Verhofstadt, an EU Parliament member representing Belgium, said, “I asked you six ‘yes’ and ‘no’ questions, and I got not a single answer.”

What did Mark Zuckerberg say in his testimony to Congress?

In his Senate testimony on April 10, 2018, Zuckerberg reiterated his apology, stating that “We didn’t take a broad enough view of our responsibility, and that was a big mistake. And it was my mistake. And I’m sorry. I started Facebook, I run it, and I’m responsible for what happens here,” adding in a response to Sen. John Thune that “we try not to make the same mistake multiple times... in general, a lot of the mistakes are around how people connect to each other, just because of the nature of the service.”

Sen. Amy Klobuchar asked if Facebook had determined whether Cambridge Analytica and the Internet Research Agency were targeting the same users. Zuckerberg replied, “We’re investigating that now. We believe that it is entirely possible that there will be a connection there.” According to NBC News , this was the first suggestion there is a link between the activities of Cambridge Analytica and the Russian disinformation campaign.

On June 11, 2018, nearly 500 pages of new testimony from Zuckerberg were released, following his promise to follow up on questions he did not have sufficient information to address during his Congressional testimony. The Washington Post notes that the release, “in some instances sidestepped lawmakers’ questions and concerns,” but that the questions being asked were not always relevant, particularly in the case of Sen. Ted Cruz, who attempted to bring attention to Facebook’s donations to political organizations, as well as how Facebook treats criticism of “Taylor Swift’s recent cover of an Earth, Wind and Fire song.”

  • Facebook gave Apple, Samsung access to data about users — and their friends (CNET)
  • Zuckerberg doubles down on Facebook’s fight against fake news, data misuse (CNET)
  • Tech execs react to Mark Zuckerberg’s apology: “I think he’s sorry he has to testify” (CBS News)
  • On Facebook, Zuckerberg gets privacy and you get nothing (ZDNet)
  • 6 Facebook security mistakes to fix on Data Privacy Day (CNET)
  • Zuckerberg takes Facebook data apology tour to Washington (CNET)
  • Zuckerberg’s Senate hearing highlights in 10 minutes (CNET via YouTube)
  • Russian politicians call on Facebook’s Mark Zuckerberg to testify on privacy (CNET)

What is the 2016 US presidential election connection to the Facebook data privacy scandal?

In December 2015, The Guardian broke the story of Cambridge Analytica being contracted by Ted Cruz’s campaign for the Republican Presidential Primary. Despite Cambridge Analytica CEO Alexander Nix’s claim in an interview with TechRepublic that the company is “fundamentally politically agnostic and an apolitical organization,” the primary financier of the Cruz campaign was Cambridge Analytica co-founder Robert Mercer, who donated $11 million to a pro-Cruz Super PAC. Following Cruz’s withdrawal from the campaign in May 2016, the Mercer family began supporting Donald Trump.

In January 2016, Facebook COO Sheryl Sandberg told investors that the election was “a big deal in terms of ad spend,” and that through “using Facebook and Instagram ads you can target by congressional district, you can target by interest, you can target by demographics or any combination of those.”

In October 2017, Facebook announced changes to its advertising platform, requiring identity and location verification and prior authorization in order to run electoral advertising. In the wake of the fallout from the data privacy scandal, further restrictions were added in April 2018, making “issue ads” regarding topics of current interest similarly restricted .

In secretly recorded conversations by an undercover team from Channel 4 News, Cambridge Analytica’s Nix claimed the firm was behind the “defeat crooked Hillary” advertising campaign, adding, “We just put information into the bloodstream of the internet and then watch it grow, give it a little push every now and again over time to watch it take shape,” and that “this stuff infiltrates the online community, but with no branding, so it’s unattributable, untrackable.” The same exposé quotes Chief Data Officer Alex Tayler as saying, “When you think about the fact that Donald Trump lost the popular vote by 3 million votes but won the electoral college vote, that’s down to the data and the research.”

  • How Cambridge Analytica used your Facebook data to help elect Trump (ZDNet)
  • Facebook takes down fake accounts operated by ‘Roger Stone and his associates’ (ZDNet)
  • Facebook, Cambridge Analytica and data mining: What you need to know (CNET)
  • Civil rights auditors slam Facebook stance on Trump, voter suppression (ZDNet)
  • The Trump campaign app is tapping a “gold mine” of data about Americans (CBS News)

What is the Brexit tie-in to the Facebook data privacy scandal?

AggregateIQ was retained by the Vote Leave campaign during the Brexit referendum, and both The Guardian and BBC claim that the Canadian company is connected to Cambridge Analytica and its parent organization SCL Group. UpGuard, the organization that found a public GitLab instance with code from AggregateIQ, has extensively detailed its connection to Cambridge Analytica and its involvement in Brexit campaigning.

Additionally, The Guardian quotes Wylie as saying the company “was set up as a Canadian entity for people who wanted to work on SCL projects who didn’t want to move to London.”

  • Brexit: A cheat sheet (TechRepublic)
  • Facebook suspends another data analytics firm, AggregateIQ (CBS News)
  • Lawmakers grill academic at heart of Facebook scandal (CBS News)

How is Facebook affected by the GDPR?

Like any organization providing services to users in European Union countries, Facebook is bound by the EU General Data Protection Regulation ( GDPR ). Due to the scrutiny Facebook is already facing regarding the Cambridge Analytica scandal, as well as the general nature of the social media giant’s product being personal information, its strategy for GDPR compliance is similarly receiving a great deal of focus from users and other companies looking for a model of compliance.

While in theory the GDPR is only applicable to people residing in the EU, Facebook will require users to review their data privacy settings. According to a ZDNet article, Facebook users will be asked if they want to see advertising based on partner information (in practice, websites that feature Facebook’s “Like” buttons). Users globally will be asked if they wish to continue sharing political, religious, and relationship information, while users in Europe and Canada will be given the option of switching automatic facial recognition on again.

Facebook members outside the US and Canada have heretofore been governed by the company’s terms of service in Ireland. This reportedly changed prior to the start of GDPR enforcement, as keeping international users under the Irish terms would seemingly make Facebook liable for damages to users worldwide, due to Ireland’s status as an EU member.

  • Google, Facebook hit with serious GDPR complaints: Others will be soon (ZDNet)
  • Facebook rolls out changes to comply with new EU privacy law (CBS News)
  • European court strikes down EU-US Privacy Shield user data exchange agreement as invalid (ZDNet)
  • GDPR security pack: Policies to protect data and achieve compliance (TechRepublic Premium)
  • IT pro’s guide to GDPR compliance (free PDF) (TechRepublic)

What are Facebook “shadow profiles?”

“Shadow profiles” are stores of information that Facebook has obtained about other people–who are not necessarily Facebook users. The existence of “shadow profiles” was discovered as a result of a bug in 2013. When a user downloaded their Facebook history, that user would obtain not just his or her address book, but also the email addresses and phone numbers of their friends that other people had stored in their address books.

Facebook described the issue in an email to the affected users. This is an excerpt of the email, according to security site Packet Storm:

When people upload their contact lists or address books to Facebook, we try to match that data with the contact information of other people on Facebook in order to generate friend recommendations. Because of the bug, the email addresses and phone numbers used to make friend recommendations and reduce the number of invitations we send were inadvertently stored in their account on Facebook, along with their uploaded contacts. As a result, if a person went to download an archive of their Facebook account through our Download Your Information (DYI) tool, which included their uploaded contacts, they may have been provided with additional email addresses or telephone numbers.

Because of the way that Facebook synthesizes data in order to attribute collected data to existing profiles, data about people who do not have Facebook accounts congeals into dossiers, popularly called “shadow profiles.” It is unclear what other sources of input are added to these “shadow profiles,” a term that Facebook does not use, according to Zuckerberg in his Senate testimony.
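To make the mechanism concrete, here is a small, purely hypothetical Python sketch of contact-list matching. It is not Facebook’s implementation; it only illustrates the general technique the email above describes: uploaded address books are matched against existing accounts to generate friend recommendations, and identifiers that match no account can still be retained and linked to the uploader, accumulating into the kind of record popularly called a “shadow profile.”

```python
from collections import defaultdict

known_accounts = {"alice@example.com": "alice_id"}   # existing users (toy data)
shadow_records = defaultdict(dict)                   # records keyed by email or phone

def ingest_address_book(uploader_id, contacts):
    """Match uploaded contacts to accounts; retain the rest as keyed records."""
    suggestions = []
    for contact in contacts:            # each contact: {"email": ..., "phone": ...}
        email = contact.get("email")
        if email in known_accounts:
            # A match becomes a friend recommendation for the uploader.
            suggestions.append(known_accounts[email])
        else:
            # No account exists, but the identifiers are still retained and
            # linked to whoever uploaded them -- the aggregate popularly
            # described as a "shadow profile".
            record = shadow_records[email or contact.get("phone")]
            record.update(contact)
            record.setdefault("uploaded_by", set()).add(uploader_id)
    return suggestions

print(ingest_address_book("bob_id", [
    {"email": "alice@example.com", "phone": "555-0100"},
    {"email": "carol@nowhere.example", "phone": "555-0101"},
]))
print(dict(shadow_records))
```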

  • Shadow profiles: Facebook has information you didn’t hand over (CNET)
  • Finally, the world is getting concerned about data privacy (TechRepublic)
  • Firm: Facebook’s shadow profiles are ‘frightening’ dossiers on everyone (ZDNet)

What are the possible implications for enterprises and business users?

Business users and business accounts should be aware that they are as vulnerable as consumers to data exposure, because Facebook harvests and shares metadata, including SMS and voice call records, between the company’s mobile applications. The stakes for businesses and employees could be higher, given that incidental or accidental data exposure could expose the company to liability, IP theft, extortion attempts, and cybercriminals.

Though deleting or deactivating Facebook applications won’t prevent the company from creating so-called advertising “shadow profiles,” it will prevent the company from capturing geolocation and other sensitive data. For actionable best practices, contact your company’s legal counsel.

  • Social media policy (TechRepublic Premium)
  • Want to attain and retain customers? Adopt data privacy policies (TechRepublic)
  • Hiring kit: Digital campaign manager (TechRepublic Premium)
  • Photos: All the tech celebrities and brands that have deleted Facebook (TechRepublic)

How can I change my Facebook privacy settings?

According to Facebook, in 2014 the company removed the ability for apps used by your friends to collect information about you. If you wish to disable third-party use of Facebook altogether, including Login With Facebook and apps that rely on Facebook profiles such as Tinder, this can be done in the Settings menu under Apps And Websites. The Apps, Websites And Games field has an Edit button; click that, and then click Turn Off.

Facebook has been proactively notifying users who had their data collected by Cambridge Analytica, though users can manually check to see if their data was shared by going to this Facebook Help page .

Facebook is also developing a Clear History button to erase what the company indicates is “their database record of you.” CNET and CBS News Senior Producer Dan Patterson noted on CBSN that “there aren’t a lot of specifics on what that clearing of the database will do, and of course, as soon as you log back in and start creating data again, you set a new cookie and you start the process again.”

To gain a better understanding of how Facebook handles user data, including what options can and cannot be modified by end users, it may be helpful to review Facebook’s Terms of Service , as well as its Data Policy and Cookies Policy .

  • Ultimate guide to Facebook privacy and security (Download.com)
  • Facebook’s new privacy tool lets you manage how you’re tracked across the web (CNET)
  • Securing Facebook: Keep your data safe with these privacy settings (ZDNet)
  • How to check if Facebook shared your data with Cambridge Analytica (CNET)

Note: This article was written and reported by James Sanders and Dan Patterson. It was updated by Brandon Vigliarolo.


Facebook’s ethical failures are not accidental; they are part of the business model

  • Opinion Paper
  • Published: 05 June 2021
  • Volume 1, pages 395–403 (2021)


  • David Lauer   ORCID: orcid.org/0000-0002-0003-4521 1  


Facebook’s stated mission is “to give people the power to build community and bring the world closer together.” But a deeper look at their business model suggests that it is far more profitable to drive us apart. By creating “filter bubbles”—social media algorithms designed to increase engagement and, consequently, create echo chambers where the most inflammatory content achieves the greatest visibility—Facebook profits from the proliferation of extremism, bullying, hate speech, disinformation, conspiracy theory, and rhetorical violence. Facebook’s problem is not a technology problem. It is a business model problem. This is why solutions based in technology have failed to stem the tide of problematic content. If Facebook employed a business model focused on efficiently providing accurate information and diverse views, rather than addicting users to highly engaging content within an echo chamber, the algorithmic outcomes would be very different.

Facebook’s failure to check political extremism, [ 15 ] willful disinformation, [ 39 ] and conspiracy theory [ 43 ] has been well-publicized, especially as these unseemly elements have penetrated mainstream politics and manifested as deadly, real-world violence. So it naturally raised more than a few eyebrows when Facebook’s Chief AI Scientist Yann LeCun tweeted his concern [ 32 ] over the role of right-wing personalities in downplaying the severity of the COVID-19 pandemic. Critics were quick to point out [ 29 ] that Facebook has profited handsomely from exactly this brand of disinformation. Consistent with Facebook’s recent history on such matters, LeCun was both defiant and unconvincing.

In response to a frenzy of hostile tweets, LeCun made the following four claims:

Facebook does not cause polarization or so-called “filter bubbles” and that “most serious studies do not show this.”

Critics [ 30 ] who argue that Facebook is profiting from the spread of misinformation are “factually wrong.” Footnote 1

Facebook uses AI-based technology to filter out [ 33 ]:

Hate speech;

Calls to violence;

Bullying; and

Disinformation that endangers public safety or the integrity of the democratic process.

Facebook is not an “arbiter of political truth” and that having Facebook “arbitrate political truth would raise serious questions about anyone’s idea of ethics and liberal democracy.”

Absent from the claims above is acknowledgement that the company’s profitability depends substantially upon the polarization LeCun insists does not exist.

Facebook has had a profound impact on our access to ideas, information, and one another. It has unprecedented global reach, and in many markets serves as a de-facto monopolist. The influence it has over individual and global affairs is unique in human history. Mr. LeCun has been at Facebook since December 2013, first as Director of AI Research and then as Chief AI Scientist. He has played a leading role in shaping Facebook’s technology and approach. Mr. LeCun’s problematic claims demand closer examination. What follows, therefore, is a response to these claims which will clearly demonstrate that Facebook:

Elevates disinformation campaigns and conspiracy theories from the extremist fringes into the mainstream, fostering, among other effects, the resurgent anti-vaccination movement, broad-based questioning of basic public health measures in response to COVID-19, and the proliferation of the Big Lie of 2020—that the presidential election was stolen through voter fraud [ 16 ];

Empowers bullies of every size, from cyber-bullying in schools, to dictators who use the platform to spread disinformation, censor their critics, perpetuate violence, and instigate genocide;

Defrauds both advertisers and newsrooms, systematically and globally, with falsified video engagement and user activity statistics;

Reflects an apparent political agenda espoused by a small core of corporate leaders, who actively impede or overrule the adoption of good governance;

Brandishes its monopolistic power to preserve a social media landscape absent meaningful regulatory oversight, privacy protections, safety measures, or corporate citizenship; and

Disrupts intellectual and civil discourse, at scale and by design.

1 I deleted my Facebook account

I deleted my account years ago for the reasons noted above, and a number of far more personal reasons. So when LeCun reached out to me, demanding evidence for my claims regarding Facebook’s improprieties, it was via Twitter. What proof did I have that Facebook creates filter bubbles that drive polarization?

In anticipation of my response, he offered the claims highlighted above. As evidence of his claims, he directed my attention to a single research paper [ 23 ] that, on closer inspection, does not appear at all to reinforce his case.

The entire exchange also suggests that senior leadership at Facebook still suffers from a massive blindspot regarding the harm that its platform causes—that they continue to “move fast and break things” without regard for the global impact of their behavior.

LeCun’s comments confirm the concerns that many of us have held for a long time: Facebook has declined to resolve its systemic problems, choosing instead to paper over these deep philosophical flaws with advanced, though insufficient, technological solutions. Even when Facebook takes occasion to announce its triumphs in the ethical use of AI, such as its excellent work [ 8 ] detecting suicidal tendencies, its advancements pale in comparison to the inherent problems written into its algorithms.

This is because, fundamentally, their problem is not a failure of technology, nor a shortcoming in their AI filters. Facebook’s problem is its business model. Facebook makes superficial technology changes, but at its core, profits chiefly from engagement and virality. Study after study has found that “lies spread faster than the truth,” [ 47 ] “conspiracy theories spread through a more decentralized network,” [ 41 ] and that “politically extreme sources tend to generate more interactions from users.” Footnote 2 Facebook knows that the most efficient way to maximize profitability is to build algorithms that create filter bubbles and spread viral misinformation.

This is not a fringe belief or controversial opinion. This is a reality acknowledged even by those who have lived inside of Facebook’s leadership structure. As the former director of monetization for Facebook, Tim Kendall explained in his Congressional testimony, “social media services that I, and others have built, have torn people apart with alarming speed and intensity. At the very least we have eroded our collective understanding—at worst, I fear we are pushing ourselves to the brink of a civil war.” [ 38 ]

2 Facebook’s black box

To effectively study behavior on Facebook, we must be able to study Facebook’s algorithms and AI models. Therein lies the first problem. The data and transparency to do so are simply not there. Facebook does not practice transparency—they do not make comprehensive data available on their recommendation and filtering algorithms, or their other implementations of AI. One organization attempting to study the spread of misinformation, NYU’s Cybersecurity for Democracy, explains, “[o]ur findings are limited by the lack of data provided by Facebook…. Without greater transparency and access to data, such research questions are out of reach.” Footnote 3

Facebook’s algorithms and AI models are proprietary, and they are intentionally hidden from us. While this is normal for many companies, no other company has 2.85 billion monthly active users. Any platform that touches so many lives must be studied so that we can truly understand its impact. Yet Facebook does not make the kind of data available that is needed for robust study of the platform.

Facebook would likely counter this, and point to their partnership with Harvard’s Institute for Quantitative Social Science (Social Science One) as evidence that they are making data available to researchers [ 19 ]. While this partnership is one step in the right direction, there are several problems with this model:

The data are extremely limited. At the moment they consist solely of web page addresses that were shared on Facebook during an 18-month window spanning 2017 to 2019.

Researchers have to apply for access to the data through Social Science One, which acts as a gatekeeper of the data.

If approved, researchers have to execute an agreement directly with Facebook.

This is not an open, scientific process. It is, rather, a process that empowers administrators to cherry-pick research projects that favor their perspective. If Facebook was serious about facilitating academic research, they would provide far greater access to, availability of, and insight into the data. There are legitimate privacy concerns around releasing data, but there are far better ways to address those concerns while fostering open, vibrant research.

3 Does Facebook cause polarization?

LeCun cited a single study as evidence that Facebook does not cause polarization. But do the findings of this study support Mr. LeCun’s claims?

The study concludes that “polarization has increased the most among the demographic groups least likely to use the Internet and social media.” The study does not, however, actually measure Internet or social media use directly. Its primary data-gathering instrument, a survey on polarization, did not ask whether respondents were on the Internet or whether they used social media. Instead, the study estimates whether an individual respondent is likely to be on the Internet based on an index of demographic factors which suggest “predicted” Internet use. As explained in the study, “the main predictor [they] focus on is age” [ 23 ]. Age is estimated to be negatively correlated with social media usage. Therefore, since older people are also shown to be more politically polarized, LeCun takes this as evidence that social media use does not cause polarization.

This assumption of causality is flawed. The study does not point to a causal relationship between these demographic factors and social media use. It simply says that these demographic factors drive polarization. Whether these factors have a correlational or causative relationship with the Internet and social media use is complete conjecture. The author of the study himself caveats any such conclusions, noting that “[t]hese findings do not rule out any effect of the internet or social media on political polarization.” [ 5 ].
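The flaw can be illustrated with a toy simulation using entirely synthetic numbers (an assumption-laden sketch, not the study’s data). If age independently drives both lower social media use and higher polarization, the group “predicted” to be offline will look more polarized on average, even though in this simulation social media use adds to polarization for every individual by construction.

```python
import random

random.seed(0)

def simulate_person():
    age = random.randint(18, 85)
    # Older people are assumed less likely to be online (the study's proxy logic).
    uses_social_media = random.random() < (1.0 - (age - 18) / 80)
    # Polarization rises with age on its own, and social media use adds a
    # smaller, genuinely causal bump of +0.2 (all numbers are made up).
    polarization = 0.03 * age + (0.2 if uses_social_media else 0.0) + random.gauss(0, 0.3)
    return uses_social_media, polarization

people = [simulate_person() for _ in range(100_000)]
online = [p for used, p in people if used]
offline = [p for used, p in people if not used]

# The offline (older) group shows higher average polarization, mirroring the
# "least likely to be online are most polarized" pattern -- yet here social
# media use causes additional polarization by design.
print("mean polarization, online: ", round(sum(online) / len(online), 3))
print("mean polarization, offline:", round(sum(offline) / len(offline), 3))
```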

Not only is LeCun’s assumption flawed, it is directly refuted by a recent Pew Research study [ 3 ] that found an overwhelmingly high percentage of US adults age 65 + are on Facebook (50%), the most of any social network. If anything, older age is actually more clearly correlated with Facebook use relative to other social networks.

Moreover, in 2020, the MIS Quarterly journal published a study by Steven L. Johnson, et al. that explored this problem and found that the “more time someone spends on Facebook, the more polarized their online news consumption becomes. This evidence suggests Facebook indeed serves as an echo chamber especially for its conservative users” [ 24 ].

Allcott et al. also explore this question in “The Welfare Effects of Social Media” (November 2019), beginning with a review of other studies confirming a relationship between social media use, well-being, and political polarization [ 1 ]:

More recent discussion has focused on an array of possible negative impacts. At the individual level, many have pointed to negative correlations between intensive social media use and both subjective well-being and mental health. Adverse outcomes such as suicide and depression appear to have risen sharply over the same period that the use of smartphones and social media has expanded. Alter (2018) and Newport (2019), along with other academics and prominent Silicon Valley executives in the “time well-spent” movement, argue that digital media devices and social media apps are harmful and addictive. At the broader social level, concern has focused particularly on a range of negative political externalities. Social media may create ideological “echo chambers” among like-minded friend groups, thereby increasing political polarization (Sunstein 2001, 2017; Settle 2018). Furthermore, social media are the primary channel through which misinformation spreads online (Allcott and Gentzkow 2017), and there is concern that coordinated disinformation campaigns can affect elections in the US and abroad.

Allcott’s 2019 study uses a randomized experiment in the run-up to the November 2018 midterm elections to examine how Facebook affects several individual and social welfare measures. They found that:

deactivating Facebook for the four weeks before the 2018 US midterm election (1) reduced online activity, while increasing offline activities such as watching TV alone and socializing with family and friends; (2) reduced both factual news knowledge and political polarization; (3) increased subjective well-being; and (4) caused a large persistent reduction in post-experiment Facebook use.

In other words, not using Facebook for a month made you happier and resulted in less future usage. In fact, they say that “deactivation significantly reduced polarization of views on policy issues and a measure of exposure to polarizing news.” None of these findings would come as a surprise to anybody who works at Facebook.

“A former Facebook AI researcher” confirmed that they ran “‘study after study’ confirming the same basic idea: models that maximize engagement increase polarization” [ 21 ]. Not only did Facebook know this, but they continued to design and build their recommendation algorithms to maximize user engagement, knowing that this meant optimizing for extremism and polarization. Footnote 4

Facebook understood what they were building according to Tim Kendall’s Congressional testimony in 2020. He explained that “we sought to mine as much attention as humanly possible and turn [sic] into historically unprecedented profits” [ 38 ]. He went on to explain that their inspiration was “Big Tobacco’s playbook … to make our offering addictive at the outset.” They quickly figured out that “extreme, incendiary content” directly translated into “unprecedented engagement—and profits.” He was the director of monetization for Facebook—few would have been better positioned to understand Facebook’s motivations, findings and strategy.

4 Engagement, filter bubbles, and executive compensation

The term “filter bubble” was coined by Eli Pariser, who wrote a book with that title exploring how social media algorithms are designed to increase engagement and create echo chambers where inflammatory posts are more likely to go viral. Filter bubbles are not just an algorithmic outcome; often we filter our own lives, surrounding ourselves with friends (online and offline) who are more likely to agree with our philosophical, religious, and political views.

Social media platforms capitalize on our natural tendency toward filtered engagement. These platforms build algorithms, and structure executive compensation, [ 27 ] to maximize such engagement. By their very design, social media curation and recommendation algorithms are engineered to maximize engagement, and thus, are predisposed to create filter bubbles.
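As a concrete illustration of that predisposition, consider a toy feed ranker that scores posts purely by predicted engagement. This is a hypothetical sketch, not Facebook’s ranking system; the weights and fields are invented for illustration. When inflammatory content reliably predicts clicks and comments, a single-objective engagement ranker surfaces it first without anyone explicitly optimizing for outrage.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float     # model estimate (made-up numbers)
    predicted_comments: float   # model estimate (made-up numbers)
    outrage_score: float        # proxy: inflammatory content tends to drive engagement

def engagement_score(post: Post) -> float:
    # A single-objective ranker: maximize expected engagement. Outrage is not
    # an explicit goal, but because it correlates with comments and shares,
    # an engagement-only objective ends up rewarding it.
    return 1.0 * post.predicted_clicks + 3.0 * post.predicted_comments + 2.0 * post.outrage_score

feed = [
    Post("Local charity bake sale this weekend", 0.2, 0.1, 0.0),
    Post("THEY are lying to you about the election!!", 0.6, 0.9, 0.9),
    Post("New study on commuting times", 0.3, 0.2, 0.1),
]

# The most inflammatory post ranks first under a pure engagement objective.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post.text)
```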

Facebook has long attracted criticism for its pursuit of growth at all costs. A recent profile of Facebook’s AI efforts details the difficulty of getting “buy-in or financial support when the work did not directly improve Facebook’s growth.” [ 21 ]. Andrew Bosworth, a Vice President at Facebook, said in a 2016 memo that nothing matters but growth, and that “all the work we do in growth is justified” regardless of whether “it costs someone a life by exposing someone to bullies” or if “somebody dies in a terrorist attack coordinated on our tools” [ 31 ].

Bosworth and Zuckerberg went on to claim [ 36 ] that the shocking memo was merely an attempt at being provocative. Certainly, it succeeded in this aim. But what else could they really say? It’s not a great look. And it looks even worse when you consider that Facebook’s top brass really do get paid more when these things happen. The above-referenced report is based on interviews with multiple former product managers at Facebook, and shows that their executive compensation system is largely based around their most important metric: user engagement. This creates a perverse incentive. And clearly, by their own admission, Facebook will not allow a few casualties to get in the way of their executive compensation.

5 Is it incidental or intentional?

Yaël Eisenstat, a former CIA analyst who specialized in counter-extremism, went on to work at Facebook out of concern that the social media platform was increasing radicalization and political polarization. She explained in a TED talk [ 13 ] that the current information ecosystem is manipulating its users, and that “social media companies like Facebook profit off of segmenting us and feeding us personalized content that both validates and exploits our biases. Their bottom line depends on provoking a strong emotion to keep us engaged, often incentivizing the most inflammatory and polarizing voices.” This emotional response results in more than just engagement: it results in addiction.

Eisenstat joined Facebook in 2018 and began to explore the issues which were most divisive on the social media platform. She began asking questions internally about what was causing this divisiveness. She found that “the largest social media companies are antithetical to the concept of reasoned discourse … Lies are more engaging online than truth, and salaciousness beats out wonky, fact-based reasoning in a world optimized for frictionless virality. As long as algorithms’ goals are to keep us engaged, they will continue to feed us the poison that plays to our worst instincts and human weaknesses.”

She equated Facebook’s algorithmic manipulation to the tactics that terrorist recruiters use on vulnerable youth. She offered Facebook a plan to combat political disinformation and voter suppression. She has claimed that the plan was rejected, and Eisenstat left after just six months.

As noted earlier, LeCun flatly denies [ 34 ] that Facebook creates filter bubbles that drive polarization. In sharp contrast, Eisenstat explains that such an outcome is a feature of their algorithm, not a bug. The Wall St. Journal reported that in 2018, senior executives at Facebook were informed of the following conclusions during an internal presentation [ 22 ]:

“Our algorithms exploit the human brain’s attraction to divisiveness… [and] if left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention and increase time on the platform.”

The platform aggravates polarization and tribal behavior.

Some proposed algorithmic changes would “disproportionately affect[] conservative users and publishers.”

Looking at data for Germany, an internal report found “64% of all extremist group joins are due to our recommendation tools … Our recommendation systems grow the problem.”

These are Facebook’s own words, and arguably, they provide the social media platform with an invaluable set of marketing prerogatives. They are reinforced by Tim Kendall’s testimony as discussed above.

“Most notably,” reported the WSJ, “the project forced Facebook to consider how it prioritized ‘user engagement’—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.” As noted in the section above, executive compensation was tied to “user engagement,” which meant product developers at Facebook were incentivized to design systems in this very way. Footnote 5

Mark Zuckerberg and Joel Kaplan reportedly [ 22 ] dismissed the conclusions from the 2018 presentation, calling efforts to bring greater civility to conversations on the social media platform “paternalistic.” Zuckerberg went on to say that he would “stand up against those who say that new types of communities forming on social media are dividing us.” Kaplan reportedly “killed efforts to build a classification system for hyperpolarized content.” Failing to address this has resulted in algorithms that, as Tim Kendall explained, “have brought out the worst in us. They have literally rewired our brains so that we are detached from reality and immersed in tribalism” [ 38 ].

Facebook would have us believe that it has made great strides in confronting these problems over just the last two years, as Mr. LeCun has claimed. But at present, the burden of proof is on Facebook to produce the full, raw data so that independent researchers can make a fair assessment of his claims.

6 The AI filter

According to LeCun’s tweets cited at the beginning of this paper, Facebook’s AI-powered filter cleanses the platform of:

Hate speech;

Calls to violence;

Bullying; and

Disinformation that endangers public safety or the integrity of the democratic process.

These are his words, so we will refer to them even while the actual definitions of hate speech, calls to violence, and other terms are potentially controversial and open to debate.

These claims are provably false. While “AI” (along with some very large, manual curation operations in developing countries) may effectively filter some of this content, at Facebook’s scale, some is not enough.

Let’s examine the claims a little closer.

6.1 Does Facebook actually filter out hate speech?

An investigation by the UK-based counter-extremist organization ISD (Institute for Strategic Dialogue) found that Facebook’s algorithm “actively promotes” Holocaust denial content [ 20 ]. The same organization, in another report, documents how Facebook’s “delays or mistakes in policy enforcement continue to enable hateful and harmful content to spread through paid targeted ads.” [ 17 ]. They go on to explain that “[e]ven when action is taken on violating ad content, such a response is often reactive and delayed, after hundreds, thousands, or potentially even millions of users have already been served those ads on their feeds.” Footnote 6

Zuckerberg admitted in April 2018 that hate speech in Myanmar was a problem, and pledged to act. Four months later, Reuters found more than “1000 examples of posts, comments, images and videos attacking the Rohingya or other Myanmar Muslims that were on Facebook” [ 45 ]. As recently as June 2020 there were reports [ 7 ] of troll farms using Facebook to intimidate opponents of Rodrigo Duterte in the Philippines with death threats and hateful comments.

6.2 Does Facebook actually filter out calls to violence?

The Sri Lankan government had to block access to Facebook “amid a wave of violence against Muslims … after Facebook ignored years of calls from both the government and civil society groups to control ethnonationalist accounts that spread hate speech and incited violence.” [ 42 ] A report from the Center for Policy Alternatives in September 2014 detailed evidence of 20 hate groups in Sri Lanka, and informed Facebook. In March of 2018, Buzzfeed reported that “16 out of the 20 groups were still on Facebook”. Footnote 7

When former President Trump tweeted, in response to Black Lives Matter protests, that “when the looting starts, the shooting starts,” the message was liked and shared hundreds of thousands of times across Facebook and Instagram, even as other social networks such as Twitter flagged the message for its explicit incitement of violence [ 48 ] and prevented it from being retweeted.

Facebook played a pivotal role in the planning of the January 6th insurrection in the US, providing an unchecked platform for proliferation of the Big Lie, radicalization around this lie, and coordinated organization around explicitly-stated plans to engage in violent confrontation at the nation’s capital on the outgoing president’s behalf. Facebook’s role in the deadly violence was far greater and more widespread than the role of Parler and the other fringe right-wing platforms that attracted so much attention in the aftermath of the attack [ 11 ].

6.3 Does Facebook actually filter out cyberbullying?

According to Enough Is Enough, a non-partisan, non-profit organization whose mission is “making the Internet safer for children and families,” the answer is a resounding no. According to their most recent cyberbullying statistics, [ 10 ] 47% of young people have been bullied online, and the two most prevalent platforms are Instagram at 42% and Facebook at 37%.

In fact, Facebook is failing to protect children on a global scale. According to a UNICEF poll of children in 30 countries, one in every three young people says that they have been victimized by cyberbullying. And one in five says the harassment and threat of actual violence caused them to skip school. According to the survey, conducted in concert with the UN Special Representative of the Secretary-General (SRSG) on Violence against Children, “almost three-quarters of young people also said social networks, including Facebook, Instagram, Snapchat and Twitter, are the most common place for online bullying” [ 49 ].

6.4 Does Facebook actually filter out “disinformation that endangers public safety or the integrity of the democratic process?”

To list the evidence contradicting this point would be exhausting. Below are just a few examples:

The Computational Propaganda Research Project found in their 2019 Global Inventory of Organized Social Media Manipulation that 70 countries had disinformation campaigns organized on social media in 2019, with Facebook as the top platform [ 6 ].

A Facebook whistleblower produced a 6600 word memo detailing case after case of Facebook “abdicating responsibility for malign activities on its platform that could affect the political fate of nations outside the United States or Western Europe.” [ 44 ]

Facebook is ground-zero for anti-vaccination and pandemic misinformation, with the 26-min conspiracy theory film “Plandemic” going viral on Facebook in April 2020 and garnering tens of millions of views. Facebook’s attempt to purge itself of anti-vaccination disinformation was easily thwarted when the groups guilty of proliferating this content removed the word “vaccine” from their names. In addition to undermining public health interests by spreading provably false content, these anti-vaccination groups have obscured meaningful discourse about the actual health concerns and risks that may or may not be connected to vaccinations. A paper from May 2020 attempts to map out the “multi-sided landscape of unprecedented intricacy that involves nearly 100 million individuals” [ 25 ] that are entangled with anti-vaccination clusters. That report predicts that such anti-vaccination views “will dominate in a decade” given their explosive growth and intertwining with undecided people.

According to the Knight Foundation and Gallup, [ 26 ] 75% of Americans believe they “were exposed to misinformation about the election” on Facebook during the 2020 US presidential election. This is one of those rare issues on which Republicans (76%), Democrats (75%), and Independents (75%) agree: Facebook was the primary source for election misinformation.

If those AI filters are in fact working, they are not working very well.

All of this said, Facebook’s reliance on “AI filters” misses a critical point, which is that you cannot have AI ethics without ethics [ 30 ]. These problems cannot be solved with AI. These problems cannot be solved with checklists, incremental advances, marginal changes, or even state-of-the-art deep learning networks. These problems are caused by the company’s entire business model and mission. Bosworth’s provocative quotes above, along with Tim Kendall’s direct testimony demonstrate as much.

These are systemic issues, not technological ones. Yael Eisenstat put it best in her TED talk: “as long as the company continues to merely tinker around the margins of content policy and moderation, as opposed to considering how the entire machine is designed and monetized, they will never truly address how the platform is contributing to hatred, division and radicalization.”

7 Facebook does not want to be the arbiter of truth

We should probably take comfort in Facebook’s claim that it does not wish to be the “arbiter of political truth.” After all, Facebook has a troubled history with the truth. Their ad-buying customers proved as much when Facebook was forced to pay $40 million to settle a lawsuit alleging that it had inflated, “by up to 900 percent,” the time it said users spent watching videos [ 4 ]. While Facebook would neither admit nor deny the truth of this allegation, they did admit to the error in a 2016 statement [ 14 ].

This was not some innocuous lie that just cost a few firms some money either. As Slate explained in a 2018 article, “many [publications] laid off writers and editors and cut back on text stories to focus on producing short, snappy videos for people to watch in their Facebook feeds.” [ 40 ] People lost their livelihoods to this deception.

Is this an isolated incident? Or is fraud at Facebook systemic? Matt Stoller describes the contents of recently unsealed legal documents [ 12 ] in a lawsuit alleging Facebook has defrauded advertisers for years [ 46 ]:

The documents revealed that Facebook COO Sheryl Sandberg directly oversaw the alleged fraud for years. The scheme was simple. Facebook deceived advertisers by pretending that fake accounts represented real people, because ad buyers choose to spend on ad campaigns based on where they think their customers are. Former employees noted that the corporation did not care about the accuracy of numbers as long as the ad money was coming in. Facebook, they said, “did not give a shit.” The inflated statistics sometimes led to outlandish results. For instance, Facebook told advertisers that its services had a potential reach of 100 million 18–34-year-olds in the United States, even though there are only 76 million people in that demographic. After employees proposed a fix to make the numbers honest, the corporation rejected the idea, noting that the “revenue impact” for Facebook would be “significant.” One Facebook employee wrote, “My question lately is: how long can we get away with the reach overestimation?” According to these documents, Sandberg aggressively managed public communications over how to talk to advertisers about the inflated statistics, and Facebook is now fighting against her being interviewed by lawyers in a class action lawsuit alleging fraud.

Facebook’s embrace of deception extends from its ad-buying fraud to the content on its platforms. For instance:

Those who would “aid[] and abet[] the spread of climate misinformation” on Facebook benefit from “a giant loophole in its fact-checking program.” Evidently, Facebook gives its staff the power to overrule climate scientists by deeming climate disinformation “opinion.” [ 2 ].

The former managing editor of Snopes reported that Facebook was merely using the well-regarded fact-checking site for “crisis PR,” that they did not take fact checking seriously and would ignore concerns [ 35 ]. Snopes tried hard to push against the Myanmar disinformation campaign, amongst many other issues, but its concerns were ignored.

ProPublica recently reported [ 18 ] that Sheryl Sandberg silenced and censored a Kurdish militia group that “the Turkish government had targeted” in order to safeguard their revenue from Turkey.

Mark Zuckerberg and Joel Kaplan intervened [ 37 ] in April 2019 to keep Alex Jones on the platform, despite the right-wing conspiracy theorist’s lead role in spreading disinformation about the 2012 Sandy Hook elementary school shooting and the 2018 Parkland high school shooting.

Arguably, Facebook’s executive team has not only ceded responsibility as an “arbiter of truth,” but has also on several notable occasions, intervened to ensure the continued proliferation of disinformation.

8 How do we disengage?

Facebook’s business model is focused entirely on increasing growth and user engagement. Its algorithms are extremely effective at doing so. The steps Facebook has taken, such as building “AI filters” or partnering with independent fact checkers, are superficial and toothless. They cannot begin to untangle the systemic issues at the heart of this matter, because these issues are Facebook’s entire reason for being.

So what can be done? Certainly, criminality needs to be prosecuted. Executives should go to jail for fraud. Social media companies, and their organizational leaders, should face legal liability for the impact made by the content on their platforms. One effort to impose legal liability in the US is centered around reforming section 230 of the US Communications Decency Act. It, and similar laws around the world, should be reformed to create far more meaningful accountability and liability for the promotion of disinformation, violence, and extremism.

Most importantly, monopolies should be busted. Existing antitrust laws should be used to break up Facebook and restrict its future activities and acquisitions.

The matters outlined here have been brought to the attention of Facebook’s leadership in countless ways that are well documented and readily provable. But the changes required go well beyond effective leveraging of AI. At its heart, Facebook will not change because they do not want to, and are not incentivized to. Facebook must be regulated, and Facebook’s leadership structure must be dismantled.

It seems unlikely that politicians and regulators have the political will to do all of this, but there are some encouraging signs, especially regarding antitrust investigations [9] and lawsuits [28] in both the US and Europe. Still, this issue goes well beyond mere enforcement. Somehow we must shift the incentives for social media companies, which compete for, and monetize, our attention. Until we stop rewarding Facebook’s illicit behavior with engagement, it’s hard to see a way out of our current condition. These companies are building technology that is designed to draw us in with problematic content, addict us to outrage, and ultimately drive us apart. We no longer agree on shared facts or truths, a condition that is turning political adversaries into bitter enemies and transforming ideological difference into seething contempt. Rather than help us lead more fulfilling lives or find truth, Facebook is helping us discover enemies among our fellow citizens and bombarding us with reasons to hate them, all in the service of profitability. This path is unsustainable.

The only thing Facebook truly understands is money, and all of their money comes from engagement. If we disengage, they lose money. If we delete, they lose power. If we decline to be a part of their ecosystem, perhaps we can collectively return to a shared reality.

Facebook executives have, themselves, acknowledged that Facebook profits from the spread of misinformation: https://www.facebook.com/facebookmedia/blog/working-to-stop-misinformation-and-false-news .

Cybersecurity for Democracy. (March 3, 2021). “Far-right news sources on Facebook more engaging.” https://medium.com/cybersecurity-for-democracy/far-right-news-sources-on-facebook-more-engaging-e04a01efae90 .

Facebook claims to have since broadened the metrics it uses to calculate executive pay, but to what extent this might offset the prime directive of maximizing user engagement is unclear.

Allcott, H., et al.: “The Welfare Effects of Social Media.” (2019). https://web.stanford.edu/~gentzkow/research/facebook.pdf

Atkin, E.: Facebook creates fact-checking exemption for climate deniers. Heated . (2020). https://heated.world/p/facebook-creates-fact-checking-exemption

Auxier, B., Anderson, M.: Social Media Use in 2021. Pew Research Center. (2021). https://www.pewresearch.org/internet/wp-content/uploads/sites/9/2021/04/PI_2021.04.07_Social-Media-Use_FINAL.pdf

Baron, E.: Facebook agrees to pay $40 million over inflated video-viewing times but denies doing anything wrong. The Mercury News . (2019). https://www.mercurynews.com/2019/10/07/facebook-agrees-to-pay-40-million-over-inflated-video-viewing-times-but-denies-doing-anything-wrong/

Boxell, L.: “The internet, social media, and political polarisation.” (2017). https://voxeu.org/article/internet-social-media-and-political-polarisation

Bradshaw, S., Howard, P.N.: The Global Disinformation Disorder: 2019 Global Inventory of Organised Social Media Manipulation. Working Paper 2019.2. Oxford: Project on Computational Propaganda. (2019)

Cabato, R.: Death threats, clone accounts: Another day fighting trolls in the Philippines. The Washington Post . (2020). https://www.washingtonpost.com/world/asia_pacific/facebook-trolls-philippines-death-threats-clone-accounts-duterte-terror-bill/2020/06/08/3114988a-a966-11ea-a43b-be9f6494a87d_story.html

Card, C.: “How Facebook AI Helps Suicide Prevention.” Facebook. (2018). https://about.fb.com/news/2018/09/inside-feed-suicide-prevention-and-ai/

Chee, F.Y.: “Facebook in EU antitrust crosshairs over data collection.” Reuters. (2019). https://www.reuters.com/article/us-eu-facebook-antitrust-idUSKBN1Y625J

Cyberbullying Statistics. Enough Is Enough. https://enough.org/stats_cyberbullying

Dwoskin, E.: Facebook’s Sandberg deflected blame for Capitol riot, but new evidence shows how platform played role. The Washington Post . (2021). https://www.washingtonpost.com/technology/2021/01/13/facebook-role-in-capitol-protest

DZ Reserve and Cain Maxwell v. Facebook, Inc. (2020). https://www.economicliberties.us/wp-content/uploads/2021/02/2021.02.17-Unredacted-Opp-to-Mtn-to-Dismiss.pdf

Eisenstat, Y.: Dear Facebook, this is how you’re breaking democracy [Video]. TED . (2020). https://www.ted.com/talks/yael_eisenstat_dear_facebook_this_is_how_you_re_breaking_democracy#t-385134

Fischer, D.: Facebook Video Metrics Update. Facebook . (2016). https://www.facebook.com/business/news/facebook-video-metrics-update

Fisher, M., Taub, A.: “How Everyday Social Media Users Become Real-World Extremists.” New York Times . (2018). https://www.nytimes.com/2018/04/25/world/asia/facebook-extremism.html

Frenkel, S.: “How Misinformation ‘Superspreaders’ Seed False Election Theories”. New York Times . (2020). https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html

Gallagher, A.: Profit and Protest: How Facebook is struggling to enforce limits on ads spreading hate, lies and scams about the Black Lives Matter protests . The Institute for Strategic Dialogue (2020)

Gillum, J., Elliott, J.: Sheryl Sandberg and Top Facebook Execs Silenced an Enemy of Turkey to Prevent a Hit to the Company’s Business. ProPublica. (2021). https://www.propublica.org/article/sheryl-sandberg-and-top-facebook-execs-silenced-an-enemy-of-turkey-to-prevent-a-hit-to-their-business

Gonzalez, R.: “Facebook Opens Its Private Servers to Scientists Studying Fake News.” Wired . (2018). https://www.wired.com/story/social-science-one-facebook-fake-news/

Guhl, J., Davey, J.: Hosting the ‘Holohoax’: A Snapshot of Holocaust Denial Across Social Media . The Institute for Strategic Dialogue (2020).

Hao, K.: “How Facebook got addicted to spreading misinformation”. MIT Technology Review . (2021). https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation

Horwitz, J., Seetharaman, D.: “Facebook Executives Shut Down Efforts to Make the Site Less Divisive.” Wall St Journal (2020)

Boxell, L., Gentzkow, M., Shapiro, J.M.: Internet use and political polarization. Proc. Natl. Acad. Sci. 114(40), 10612–10617 (2017). https://doi.org/10.1073/pnas.1706588114

Johnson, S.L., et al.: Understanding echo chambers and filter bubbles: the impact of social media on diversification and partisan shifts in news consumption. MIS Q. (2020). https://doi.org/10.25300/MISQ/2020/16371

Johnson, N.F., Velásquez, N., Restrepo, N.J., et al.: The online competition between pro- and anti-vaccination views. Nature 582 , 230–233 (2020). https://doi.org/10.1038/s41586-020-2281-1

Jones, J.: In Election 2020, How Did The Media, Electoral Process Fare? Republicans, Democrats Disagree. Knight Foundation . (2020). https://knightfoundation.org/articles/in-election-2020-how-did-the-media-electoral-process-fare-republicans-democrats-disagree

Kantrowitz, A.: “Facebook Is Still Prioritizing Scale Over Safety.” Buzzfeed.News . (2019). https://www.buzzfeednews.com/article/alexkantrowitz/after-years-of-scandal-facebooks-unhealthy-obsession-with

Kendall, B., McKinnon, J.D.: “Facebook Hit With Antitrust Lawsuits by FTC, State Attorneys General.” Wall St. Journal. (2020). https://www.wsj.com/articles/facebook-hit-with-antitrust-lawsuit-by-federal-trade-commission-state-attorneys-general-11607543139

Lauer, D.: [@dlauer]. And yet people believe them because of misinformation that is spread and monetized on facebook [Tweet]. Twitter. (2021). https://twitter.com/dlauer/status/1363923475040251905

Lauer, D.: You cannot have AI ethics without ethics. AI Ethics 1 , 21–25 (2021). https://doi.org/10.1007/s43681-020-00013-4

Lavi, M.: Do Platforms Kill? Harvard J. Law Public Policy. 43 (2), 477 (2020). https://www.harvard-jlpp.com/wp-content/uploads/sites/21/2020/03/Lavi-FINAL.pdf

LeCun, Y.: [@ylecun]. Does anyone still believe whatever these people are saying? No one should. Believing them kills [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1363923178519732230

LeCun, Y.: [@ylecun]. The section about FB in your article is factually wrong. For starter, AI is used to filter things like hate speech, calls to violence, bullying, child exploitation, etc. Second, disinformation that endangers public safety or the integrity of the democratic process is filtered out [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1364010548828987393

LeCun, Y.: [@ylecun]. As attractive as it may seem, this explanation is false. [Tweet]. Twitter. (2021). https://twitter.com/ylecun/status/1363985013147115528

Levin, S.: ‘They don’t care’: Facebook factchecking in disarray as journalists push to cut ties. The Guardian . (2018). https://www.theguardian.com/technology/2018/dec/13/they-dont-care-facebook-fact-checking-in-disarray-as-journalists-push-to-cut-ties

Mac, R.: “Growth At Any Cost: Top Facebook Executive Defended Data Collection In 2016 Memo—And Warned That Facebook Could Get People Killed.” Buzzfeed.News . (2018). https://www.buzzfeednews.com/article/ryanmac/growth-at-any-cost-top-facebook-executive-defended-data

Mac, R., Silverman, C.: “Mark Changed The Rules”: How Facebook Went Easy On Alex Jones And Other Right-Wing Figures. BuzzFeed.News . (2021). https://www.buzzfeednews.com/article/ryanmac/mark-zuckerberg-joel-kaplan-facebook-alex-jones

Mainstreaming Extremism: Social Media’s Role in Radicalizing America: Hearings before the Subcommittee on Consumer Protection and Commerce of the Committee on Energy and Commerce, 116th Cong. (2020) (testimony of Tim Kendall)

Meade, A.: “Facebook greatest source of Covid-19 disinformation, journalists say”. The Guardian . (2020). https://www.theguardian.com/technology/2020/oct/14/facebook-greatest-source-of-covid-19-disinformation-journalists-say

Oremus, W.: The Big Lie Behind the “Pivot to Video”. Slate . (2018). https://slate.com/technology/2018/10/facebook-online-video-pivot-metrics-false.html

Wood, M.J.: Propagating and Debunking Conspiracy Theories on Twitter During the 2015–2016 Zika Virus Outbreak. Cyberpsychology, Behavior, and Social Networking 21(8) (2018). https://doi.org/10.1089/cyber.2017.0669

Rajagopalan, M., Nazim, A.: “We Had To Stop Facebook”: When Anti-Muslim Violence Goes Viral. BuzzFeed.News . (2018). https://www.buzzfeednews.com/article/meghara/we-had-to-stop-facebook-when-anti-muslim-violence-goes-viral

Rosalsky, G.: “Are Conspiracy Theories Good For Facebook?”. Planet Money . (2020). https://www.npr.org/sections/money/2020/08/04/898596655/are-conspiracy-theories-good-for-facebook

Silverman, C., Mac, R.: “I Have Blood on My Hands”: A Whistleblower Says Facebook Ignored Global Political Manipulation. BuzzFeed.News . (2020). https://www.buzzfeednews.com/article/craigsilverman/facebook-ignore-political-manipulation-whistleblower-memo

Stecklow, S.: Why Facebook is losing the way on hate speech in Myanmar. Reuters . (2018). https://www.reuters.com/investigates/special-report/myanmar-facebook-hate/

Stoller, M.: Facebook: What is the Australian law? And why does FB keep getting caught for fraud?. Substack. (2021). https://mattstoller.substack.com/p/facecrook-dealing-with-a-global-menace

Vosoughi, S., Roy, D., Aral, S.: The spread of true and false news online. Science 359(6380), 1146–1151 (2018). https://doi.org/10.1126/science.aap9559

The White House 45 Archived [@WhiteHouse45]: “These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!” [Tweet]. Twitter. (2020) https://twitter.com/WhiteHouse45/status/1266342941649506304

UNICEF.: UNICEF poll: More than a third of young people in 30 countries report being a victim of online bullying. (2019). https://www.unicef.org/press-releases/unicef-poll-more-third-young-people-30-countries-report-being-victim-online-bullying

About this article

Lauer, D. Facebook’s ethical failures are not accidental; they are part of the business model. AI Ethics 1 , 395–403 (2021). https://doi.org/10.1007/s43681-021-00068-x

The Facebook and Cambridge Analytica scandal, explained with a simple diagram

A visual of how it all fits together. Cambridge Analytica is now shutting down.

Cambridge Analytica, the political consulting firm that did work for the Trump campaign and harvested raw data from up to 87 million Facebook profiles, is shutting down.

There is a complicated web of relationships that explains how the Trump campaign, Cambridge Analytica, and Facebook are tied together, as my colleague Andrew Prokop explains in this excellent piece.

But if you need a refresher on how all the pieces fit together, this diagram helps make sense of it all.

1) Here’s the very simple version of the story

Facebook exposed data on up to 87 million Facebook users to a researcher who worked at Cambridge Analytica, which worked for the Trump campaign.

2) But how is the Trump campaign connected to Cambridge Analytica?

Cambridge Analytica was created when Steve Bannon approached conservative megadonors Rebekah and Robert Mercer to fund a political consulting firm. Bannon became vice president of Cambridge Analytica, and during the 2016 election, he reached out to the Trump campaign to introduce the two sides.

Bannon, of course, eventually became a senior adviser to Trump before he was fired in August 2017.

So what is the SCL Group, which does the work for Cambridge Analytica? It’s a public relations and messaging firm that has clients all around the world, and as Vox’s Andrew Prokop writes:

SCL tends to describe its capabilities in grandiose and somewhat unsettling language — the company has touted its expertise at “psychological warfare” and “influence operations.” It’s long claimed that its sophisticated understanding of human psychology helps it target and persuade people of its clients’ preferred message.

This means, as the New York Times writes, Cambridge Analytica is basically a shell for the SCL Group.

3) How did Cambridge Analytica get its data?

Cambridge Analytica CEO Alexander Nix actually reached out to WikiLeaks founder Julian Assange about the emails that were hacked from the Democratic National Committee’s servers, according to the Wall Street Journal.

But the more important part of this story is how Cambridge Analytica got its data from Facebook. And according to a former Cambridge Analytica employee, the firm got it through researcher Aleksandr Kogan, a Russian American who worked at the University of Cambridge.

4) How did Kogan use Facebook to harvest up to 87 million user profiles?

Kogan built a Facebook app that was a quiz.

It not only collected data from people who took the quiz but, as my colleague Aja Romano writes, it also exploited a loophole in Facebook’s API that allowed it to collect data from the quiz takers’ Facebook friends as well.

As Romano points out, Facebook prohibited the selling of data collected with this method, but Cambridge Analytica sold the data anyway.
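
To see roughly how that loophole worked, the sketch below traces the data flow. It assumes the long-deprecated Graph API v1.0 permission model that was in force before 2015; the endpoint and field names are shown for illustration only and will not work against today’s API, because Facebook removed friend-data permissions in 2014-2015.

```python
# Illustration of the pre-2015 data flow described above (Graph API v1.0 era).
# A single quiz taker's access token was enough to read profile fields of their
# friends, who never installed the app or consented. For explanation only; these
# permissions no longer exist.
import requests

GRAPH = "https://graph.facebook.com/v1.0"

def harvest(quiz_taker_token: str) -> list[dict]:
    # 1. The quiz taker's own profile -- they agreed to this when installing the app.
    me = requests.get(f"{GRAPH}/me", params={"access_token": quiz_taker_token}).json()

    # 2. Their friends' data -- the loophole: it rode along on the friends_*
    #    permissions granted by the quiz taker, not by the friends themselves.
    friends = requests.get(
        f"{GRAPH}/me/friends",
        params={"fields": "id,name,likes,location", "access_token": quiz_taker_token},
    ).json()

    return [me] + friends.get("data", [])
```

Facebook deprecated these friend permissions with Graph API v2.0 in 2014 and removed them entirely by 2015, which is why the same request fails today.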

Why this is a Facebook scandal more than a Cambridge Analytica one

Facebook founder and CEO Mark Zuckerberg wrote in a response to this scandal, “I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there’s more to do, and we need to step up and do it.”

But former Facebook employees have said that there’s a tension between the security team and the legal/policy team in terms of how they prioritize user protection in their decision-making.

“The people whose job is to protect the user always are fighting an uphill battle against the people whose job is to make money for the company,” Sandy Parakilas, who worked on the privacy side at Facebook, told the New York Times.

Now, there is a decent chance Cambridge Analytica’s work didn’t actually do much to elect Trump; the firm’s reputation in the political consulting community is less than stellar.

But this highlights a larger debate over how much users can trust Facebook with their data. Facebook allowed a third-party developer to engineer an application for the sole purpose of gathering data. And the developer was able to exploit a loophole to gather information on not only people who used the app but all of their friends, without them knowing.

Still, it’s Cambridge Analytica paying the price today, having lost multiple clients over the last several months of unflattering publicity.

Facebook’s Cambridge Analytica Controversy Could Be Big Trouble for the Social Network. Here’s What to Know

The fallout from Facebook’s data scandal involving Cambridge Analytica continues this week, as more information came to light confirming that at least 87 million Facebook users were impacted by hidden data harvesting, an update from the “tens of millions” that Facebook previously said were affected by its ongoing privacy crisis.

Facebook, the largest social media company in the world, admitted today, at the bottom of a blog post written by Chief Technology Officer Mike Schroepfer, that the number was much higher than previously believed.

“In total, we believe the Facebook information of up to 87 million people — mostly in the US — may have been improperly shared with Cambridge Analytica,” he wrote.

He laid out nine changes Facebook is now working on to better protect user information, saying they will limit the ways apps are allowed to collect and share people’s data.

Third party apps will now be restricted from accessing certain kinds of user information they could previously collect from Facebook features like Events, Groups and Pages. Other changes include updates to the ways third-party apps can collect data related to logins for things like “check-ins, likes, photos, posts, videos, events and groups,” the company’s statement reads.

It also says that apps will no longer be allowed to collect personal data such as “religious or political views, relationship status and details, custom friends lists, education and work history, fitness activity, book reading activity, music listening activity, news reading, video watch activity, and games activity.”

The social media juggernaut also announced that it has disabled certain features in “search and account recovery” to prevent people’s public profiles from being scraped by “malicious actors.” It is also completely shutting down its Partner Categories, which is “a product that lets third-party data providers offer their targeting directly on Facebook,” the statement says.

A new feature is also being added to everyone’s newsfeed — a link at the top of the page that will allow users to see what information apps they use have collected about them, and also allow users to remove those apps if they choose. Facebook pledged to alert those users whose personal data was improperly collected by Cambridge Analytica.

Facebook also posted a link to updated policies for Instagram, which it owns.

While the users affected are mainly in the U.S., the BBC has also reported that about one million of the 87 million users impacted are based in the U.K.

Facebook’s announcement that almost 90 million users were affected comes on the heels of the news that CEO Mark Zuckerberg will testify before Congress on April 11.

The drama began when the $500 billion company admitted earlier in March that data analysis firm Cambridge Analytica, which has close ties to President Trump’s election campaign and right-leaning megadonors, used data that had been collected from millions of users without their consent. Facebook has since suspended Cambridge Analytica’s access to its platform.

Facebook continues to take a beating from commentators and investors alike as its stock keeps plunging: the company’s market cap dropped $50 billion during the first week that the scandal came to light, including its largest ever two-day drop. Meanwhile, lawmakers in the U.S. and the U.K. who demanded Zuckerberg explain his company’s practices may finally get some answers during his testimony next week.

Here’s what to know about Facebook’s latest crisis.

What is Cambridge Analytica?

Cambridge Analytica is a political analysis firm that claims to build psychological profiles of voters to help its clients win elections. The company is accused of buying millions of Americans’ data from a researcher who told Facebook he was collecting it strictly for academic purposes. Facebook allowed Aleksandr Kogan, a psychology professor at the University of Cambridge who owns a company called Global Science Research, to harvest data from users who downloaded his app. The problem was that Facebook users who agreed to give their information to Kogan’s app also gave up permission to harvest data on all of their Facebook friends, according to the Guardian.

The breach occurred when Kogan then sold this data to Cambridge Analytica, which is against Facebook’s rules. Facebook says it has since changed the way it allows researchers to collect data from the platform as a result.

Christopher Wylie, a whistleblower who worked at Cambridge Analytica before quitting in 2014, claimed on NBC’s Today Show Monday morning that the firm was “founded on misappropriated data of at least 50 million Facebook users.”

Wylie added that Cambridge Analytica’s goal was to establish profiling algorithms that would “allow us to explore mental vulnerabilities of people, and then map out ways to inject information into different streams or channels of content online so that people started to see things all over the place that may or may not have been true.”

During a hearing in February, the data firm initially told British Parliament that it did not collect people’s information without their consent, but it later admitted in a statement to the New York Times that it did in fact obtain the data, though the company claims to have deleted the information as soon as it found out it violated Facebook’s privacy rules.

Cambridge Analytica issued a number of press releases in the days following the explosive media reports, saying that it “strongly denies the claims” it acted improperly.

“In 2014 we received Facebook data and derivatives of Facebook data from another company, GSR, that we engaged in good faith to legally supply data for research,” the statement reads. “After it subsequently became known that GSR had broken its contract with Cambridge Analytica because it had not adhered to data protection regulation, Cambridge Analytica deleted all the Facebook data and derivatives, in cooperation with Facebook… This Facebook data was not used by Cambridge Analytica as part of the services it provided to the Donald Trump presidential campaign.”

Facebook also issued a statement on its website Monday saying that the claim there was a data breach is “completely false” and Facebook users “gave their consent” when they signed up for certain kinds of apps, like the one Kogan exploited for data collection purposes. The social media juggernaut also maintained that “no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.”

Who is the Cambridge Analytica whistleblower?

Christopher Wylie, a former employee of Cambridge Analytica, spoke out about the firm’s practices on the Today Show Monday morning after previously giving an interview to the New York Times. Wylie, who quit the company in 2014, said he believes it’s important for Americans to know what companies are doing with their personal information, as well as whether Cambridge Analytica’s practices influenced the democratic process.

“This was a company [Cambridge Analytica] that really took fake news to the next level by powering it with algorithms,” he said in an interview on the Today Show Monday morning.

Wylie also claimed that Cambridge Analytica has been in talks with Russian oil companies and employs a psychologist who works on Russia-funded projects. Any ties between Cambridge and Russia could complicate matters for Facebook, which has spent the past several months grappling with accusations that Moscow used it and other social media networks to meddle in the 2016 U.S. elections.

In a statement, Cambridge Analytica said Wylie left the company to found a rival firm.

“Their source is a former contractor for Cambridge Analytica – not a founder as has been claimed – who left in 2014 and is misrepresenting himself and the company throughout his comments,” the company said.

What is Cambridge Analytica’s connection to Steve Bannon?

Onetime Trump campaign advisor and former White House chief strategist Steve Bannon was previously vice president of Cambridge Analytica’s board, according to the New York Times. Wylie told the Guardian that Bannon was his boss at Cambridge Analytica. Bannon has been involved in propping up right-wing political groups for years, having been the executive chairman and co-founder of Breitbart News, a far-right digital publication, until he stepped down from the position in January.

Additionally, Republican megadonor Robert Mercer, who has funded numerous conservative campaigns at every level of government, invested $15 million in Cambridge Analytica. His daughter, Rebekah Mercer, was also a board member of the political data firm. The Mercers originally supported Ted Cruz’s presidential campaign, but became patrons of the Trump campaign after Cruz bowed out of the 2016 presidential race.

The Times reported that, through their family foundation, the Mercers have donated more than $100 million to conservative causes, including $10 million to Breitbart News and another $6 million to the Government Accountability Institute, a nonprofit founded by Bannon.

What does Mark Zuckerberg say?

Facebook executives responded to the crisis on Wednesday by issuing statements on the social media platform.

Zuckerberg admitted that Facebook made mistakes and acknowledged that his company failed to responsibly protect the data of customers.

He gave a timeline explaining how the improper data harvesting occurred, and said that in 2014 the company changed its practices to limit the ability of “abusive apps” to collect data from users and their other Facebook friends who did not give consent.

“In 2007, we launched the Facebook Platform with the vision that more apps should be social…To do this, we enabled people to log into apps and share who their friends were and some information about them….In 2013, a Cambridge University researcher named Aleksandr Kogan created a personality quiz app. It was installed by around 300,000 people who shared their data as well as some of their friends’ data. Given the way our platform worked at the time this meant Kogan was able to access tens of millions of their friends’ data.”

Zuckerberg also acknowledged that journalists informed Facebook as early as 2015 that Kogan shared this data with Cambridge Analytica, and said the company subsequently banned Kogan’s apps from the social network because they violated Facebook policies.

“This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that,” he wrote on Facebook.

He also said the company will investigate all apps that had “access to large amount of information” before the 2014 policy changes, and that Facebook plans to further restrict developers’ access to Facebook users’ data moving forward. The company will also make it easier for users to deny permission to third party developers that collect their personal information. As part of this effort, the company plans to move its privacy tool to the top of the News Feed.

Facebook’s Chief Operating Officer Sheryl Sandberg shared Zuckerberg’s post on her own Facebook page, saying she “deeply regrets” that the company did not do more to address the problem. Facebook will also start to ban developers who misuse “personally identifiable information” and alert users when Facebook learns their data has been misused, she wrote.

Related case studies and sources

  1. Facebook case study

    It's incredible that Facebook now has more than a billion monthly active users worldwide, yet has fewer than 5,000 employees. In this case study which we aim to keep up-to-date between the new editions of my books, I have structured the review of Facebook strategy using some of the customer-related aspects of the Business Model Canvas.

  2. 11 Facebook Case Studies & Success Stories to Inspire You

    A case study will often go over a brand's marketing challenge, goals, a campaign's key details, and its results. This gives you a real-life glimpse at what led a marketing team to reach success on Facebook. Case studies also can help you avoid or navigate common challenges that other companies faced when implementing a new Facebook strategy.

  3. An Overview of Facebook's Journey to Meta

    Facebook is a social network that has millions of users, connecting with each other around the globe. It was originally launched under the name “FaceMash” in October 2003 and afterwards ...

  4. Key takeaways from Facebook's whistle-blower hearing.

    Frances Haugen, a former Facebook employee, said during testimony before the Senate that foreign governments, including China and Iran, were using the platform to conduct surveillance and ...

  5. Social Media & Privacy: A Facebook Case Study

    Globally, the website has over 968 million daily users and 1.49 billion monthly users, with nearly 844 million mobile daily users and 3.31 billion mobile monthly users (See Figure 1 ...

  6. Facebook: A Case Study of Strategic Leadership

    In summary, the future success of Facebook, or any social network for that matter, ... Miller, C. (2007, July 23). Social networking class war ...

  7. Everything You Need to Know About Facebook's Controversial ...

    The study’s first author, Adam Kramer of Facebook, confirms (on Facebook, of course) that they did indeed want to investigate the theory that seeing friends’ positive content makes users ...

  8. Facebook-Cambridge Analytica data scandal

    The Facebook-Cambridge Analytica data scandal also received media coverage in the form of a 2019 Netflix documentary, The Great Hack. ... In August 2022, Facebook agreed to settle a lawsuit seeking damages in the case for an undisclosed sum. In December 2022, Meta Platforms agreed to pay $725 million to settle a private class-action lawsuit ...

  9. Cambridge Analytica and Facebook: The Scandal and the Fallout So Far

    Zuckerberg speaks to lawmakers. In a hearing held in response to revelations of data harvesting by Cambridge Analytica, Mark Zuckerberg, the Facebook chief executive, faced questions from senators ...

  10. Everything We Know About Facebook's Secret Mood ...

    The study found that by manipulating the News Feeds displayed to 689,003 Facebook users, it could affect the content that those users posted to Facebook. More negative News Feeds led to more ...

  11. How to write a social media case study (with template)

    Social media case study template. Writing a case study is a lot like writing a story or presenting a research paper (but less dry). This is a general outline to follow but you are welcome to enhance to fit your needs. Headline. Attention-grabbing and effective. Example: "How Benefit turns cosmetics into connection using Sprout Social" Summary

  12. Facebook and Cambridge Analytica: What You Need to Know as Fallout

    Facebook, already facing deep questions over the use of its platform by those seeking to spread Russian propaganda and fake news, is facing a renewed backlash after the news about Cambridge Analytica.

  13. Advertising Case Studies: Find Inspiring Brand Success Stories

    Learn from these inspirational local, regional and global advertising case studies and success stories. Read marketing case studies and success stories relevant to your business. Featured Success Stories

  14. Facebook data privacy scandal: A cheat sheet

    The rules were first rolled out in the US in May. On October 25, 2018, Facebook was fined £500,000 by the UK's Information Commissioner's Office for their role in the Cambridge Analytica ...

  15. Facebook—Can Ethics Scale in the Digital Age?

    Abstract. Since its founding in 2004, Facebook has built a phenomenally successful business at global scale to become the fifth most valuable public company in the world. The revelation of Cambridge Analytica events in March 2018, where 78 million users' information was leaked in a 2016 U.S. election cycle, exposed a breach of trust/privacy ...

  16. Facebook: A Case Study of Strategic Leadership

    Facebook founder and CEO, Mark Zuckerberg, had a vision and offered a social network with a clean design and a better user experience. Facebook's unique focus on relationship management also enticed users to visit more often, and stay longer when they did visit, which drew advertisers willing to invest aggressively.

  17. Facebook's ethical failures are not accidental; they are part of the

    Facebook’s failure to check political extremism, willful disinformation, and conspiracy theory has been well-publicized, especially as these unseemly elements have penetrated mainstream politics and manifested as deadly, real-world violence. So it naturally raised more than a few eyebrows when Facebook’s Chief AI Scientist Yann LeCun tweeted his concern over the role of right ...

  18. Facebook Fake News in the Post-Truth World

    Zuckerberg dismissed these claims as "crazy," asserting that Facebook was a technology company, not a media company. In 2017, it was revealed that a Russian intelligence team had purchased 3,000 political ads on Facebook in an attempt to influence the 2016 U.S. Presidential Election. Then, in March 2018, the public learned that Cambridge ...

  19. The Facebook and Cambridge Analytica scandal, explained with a ...

    Part of The Cambridge Analytica Facebook scandal. Cambridge Analytica, the political consulting firm that did work for the Trump campaign and harvested raw data from up to 87 million Facebook ...

  20. What to Know About Facebook's Cambridge Analytica Problem

    Facebook also posted a link to updated policies for Instagram, which it owns.. While the users affected are mainly in the U.S., the BBC has also reported that about one million of the 87 million ...

  21. Case Study: The WhatsApp Acquisition & Facebook Privacy Promises

    The settlement required Facebook to take several steps to make sure it lives up to its promises in the future, including giving consumers clear and prominent notice and obtaining consumers ...

  22. Facebook's emotional contagion study and the ethical problem of co

    The Facebook emotional contagion experiment, in which researchers manipulated Facebook’s news feed by, among other things, showing fewer positive posts to see if they would lead to greater user expressions of sadness, raises obvious as well as non-obvious problems (Kramer et al., 2014). Once reporters cycled through the obligatory “creepy as usual” stories about Facebook’s emotion ...