I wrote a couple of weeks ago about Facebook's latest debacle, and ended that post by saying I'd delete my data and disable my account, which I did. I was curious whether I'd miss it at all, so I waited before deciding whether to pull the plug for good - good as in "final", good as in "mental health", good as in "no longer part of their experiments".

I've decided to make it permanent, especially after doing some research. So why did I ultimately dump Facebook? It's not because I think my 1 account in 2 billion will make much of a difference - I just don't want to be involved in their antics and screwups anymore. I can't in good conscience use a site as careless as Facebook. They've got me thinking a lot about the hidden cost of "free" in general.

Aside from that, I'll let the numerous articles going all the way back to the very beginning of Facebook tell the story for me. I'll add to this list when I catch FB in the news again... so fairly often I'm sure.


2018

Hackers gaining access to 50 million accounts through the "View As" feature

On September 28, Facebook announced that a bug in the "View As" feature had granted hackers complete access to up to 50 million users' accounts. Kudos to Facebook for patching and publicizing it within a few days of discovering it, but it sounds like the hackers had access to even more than Cambridge Analytica did... they were essentially logged in as those other users. Even worse, that access extended to any third-party site those users logged into with their Facebook accounts.

Posting 14 million users' private posts as public

It was reported in June that Facebook had accidentally exposed 14 million users' private posts while testing a new feature. Even when users expected their posts to be private, they were published as public. Facebook fixed it, but they can't "fix" people having read something that should've been private, nor can they "fix" sites that may have archived those posts while they were public.

Allowing device manufacturers to continue to syphon data even after Cambridge Analytica

In June 2018, it was reported that Facebook had ongoing partnerships with device manufacturers, through which it continued to share loads of data about a device's users and anyone they were connected to - even as it promised to clamp down on access after the Cambridge Analytica story surfaced three months prior.


2015

Allowing a company to syphon the data of 50 million users

In March 2018, it was discovered that a company had syphoned the data of 50 MILLION users through Facebook's API back in 2015 - and they did it legitimately. They only screwed up (according to Facebook) when they handed the data to someone else. Facebook knew about it for years but never disclosed the full extent until someone blew the whistle.


Testing whether your location could improve your 'suggested friends' list

In mid-2016, it was discovered that Facebook had tested using users' location data to improve the 'suggested friends' list. The reporter stumbled onto this when she published a piece incorrectly accusing Facebook of using everyone's location data to improve the lists, to which Facebook quickly responded by admitting it had just been part of a four-week test at the end of 2015.

Still, it's creepy to think that Facebook (really, any app) is able to do this using your mobile device's location - and it's why you should check the list of permissions an app requests very carefully.


2012

Sharing your purchases with friends, without telling you

For a couple of months, Facebook presented certain offers to 1.2 million users. If you accepted an offer, your friends were notified (half the time without your knowledge or consent) so Facebook could see whether knowing you'd accepted influenced whether they accepted it too.


Adding negative posts to your feed to affect your emotions

In 2012 (as reported in June 2014), Facebook ran a large-scale experiment on the emotions of 700,000 users to see what effect skewing feeds toward more positive or more negative posts had on their emotional state. You can read more about the findings on PNAS, and about the (pre?)2012 study that probably led to it.


Collecting the messages you DIDN'T post

Facebook ran a two-week experiment in July 2012, collecting data on how 5 million users censored themselves before posting a message. They collected nearly everything you typed, even if you never posted it or edited it before posting. In other words, just typing into their text box shared data with them - even if you never pressed the "post" button.
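To make the mechanism concrete, here's a minimal TypeScript sketch of how any page can capture a draft the moment you type it, no "post" click required. The #composer element id and /log-draft endpoint are hypothetical, invented purely for illustration - this is not Facebook's actual code.

```typescript
// Hypothetical sketch: capturing text a user types but never posts.
// The element id and endpoint are invented for illustration.
const composer = document.querySelector<HTMLTextAreaElement>("#composer");
let draft = "";

if (composer) {
  composer.addEventListener("input", () => {
    draft = composer.value; // every keystroke updates the captured draft
  });
}

window.addEventListener("beforeunload", () => {
  // Even if the user abandons the message, the draft can still be reported.
  if (draft.length > 0) {
    navigator.sendBeacon("/log-draft", JSON.stringify({ draft }));
  }
});
```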


2010

Comparing your public voting record to your actions on Facebook

Facebook ran an experiment with 61 million users, showing some of them an "I Voted" button and then checking public voting records to determine how the presence of the button affected whether you (and your friends) actually voted. That's right - they looked up your voting record for the purposes of their "research".


Randomly hiding your shared links from friends' feeds

Facebook ran an experiment with 253 million users between August and October, randomly preventing shared links from showing up in friends' feeds to see how it affected further sharing. Even when you thought you were sharing something with your friends, Facebook may have been quietly censoring you.


2009

Pushing "recommended" privacy settings that made your data public

In July 2009, Facebook reworked its privacy settings, ostensibly to make them easier to find. In reality, the new "recommended" settings made it easy for users to accidentally switch things back to public and share even more data.

They even removed some useful privacy settings altogether, like the ability to opt out of sharing data through their API. Interestingly, that's exactly the kind of setting that would've prevented the abuse that let a company collect 50 million unsuspecting users' data in 2015.


2008

Promising users that "verified" apps were more secure, but never actually checking them

To increase user trust in apps, Facebook launched a verification program in November 2008. They claimed to be certifying apps for security and awarded "verified" badges to those apps, but the FTC later determined that Facebook was doing no such thing. Who knows how many users trusted an app because Facebook assured them it was safe, when in fact it had never been checked.

You can read more about what the FTC found in its filings on Facebook, Inc. Page 15 of the complaint states that, "before it awarded the Verified Apps badge, Facebook took no steps to verify either the security of a Verified Application's website or the security the Application provided for the user information it collected, beyond such steps as it may have taken regarding any other Platform Application." In other words, the program didn't do what it promised at all.


2007

Tracking your activity across the web with Beacon

In November 2007, Facebook released a feature called Beacon. Any site could embed a 1x1 pixel in its pages to send data about user activity back to Facebook, which could then pair it up with actual Facebook users (via cookies, I'd guess).
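Here's a rough TypeScript sketch of how a Beacon-style tracking pixel works in general. The tracker URL and parameters are invented for illustration - the actual Beacon implementation surely differed in its details.

```typescript
// Hypothetical sketch of a Beacon-style tracking pixel.
// The tracker domain and parameters are invented for illustration.
function reportToTracker(action: string, item: string): void {
  const params = new URLSearchParams({ action, item, page: location.href });
  const img = new Image(1, 1); // invisible 1x1 image
  // Requesting the image sends the data as query parameters; the browser
  // also attaches the tracker's cookies, letting it tie this activity
  // to a user who is logged in on the tracker's own site.
  img.src = `https://tracker.example.com/beacon.gif?${params.toString()}`;
}

// e.g. a shopping site might call this after checkout:
reportToTracker("purchase", "movie-tickets");
```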

Imagine: you log into Facebook and "like" a few posts by friends, then leave the site. You spend the rest of the day (or week, or whatever) surfing the web, making purchases... just doing your thing, clueless to the fact that random sites are sending your actions to Facebook, which then knows everything you've been doing. Creepy enough, but then they shared that activity with the rest of your network too.

It was enabled by default - opt-out, not opt-in - and (shocked face) it was not well received.



One more thought

Hey, you got this far, so here's one more video. I don't get why The Onion, of all possible sources, produced it, but it hits on another point about social media that bothers me. What happens to all our little future adults when they realize they've basically been living some strange version of The Truman Show?