I have stopped participating on Facebook. I’m leaving my account live (so that my post about why I’m leaving is visible), but everything will be shut off as much as possible, and the rest will be ignored. No Messenger, no more posts on my timeline, no notifications, no tagging, etc.
I’ll be spending more time on LinkedIn and Twitter. I hope you’ll follow those pages, or use the Subscribe form on the right side of my blog page.
This isn’t an easy decision because it will be harder to keep in touch with everyone in my life, not least my family (including famous daughter and grandchild) and the many friends I’ve made in my travels. But I’ve decided we must stand up.
The rest of this post explains why; if you don’t need that info, ignore it – but please keep in touch.
I’ve concluded that Facebook is incompetent about the security of our data and irresponsible about the side effects when marketers, bots, and monitors interact with the site. It allows (or fails to stop) unscrupulous behavior by unseen marketers, operating behind the scenes or even posing as members of patient groups.
In my opinion none of us should entrust a single bit of patient information to Facebook. Of course it’s up to you: you may want to stay, all things considered, and I support you in doing what you want. But be aware of what could be going on behind the curtain.
I’ll discuss three areas that have multiple evidence points.
1. Covert marketing within patient groups
For most of us, if someone is secretly selling on Facebook it may be merely annoying. But in some cases these people have done really bad things with patient groups.
- Facebook has secretly let marketers scrape (copy from the screen) the names of members of private (“closed”) cancer groups, including breast cancer groups (see Facebook allowed third party marketers to download names of people in private groups), and then said it didn’t realize this was happening, even though the marketers were using a browser extension provided by FB.
- Or this revolting story, subtitled Huge groups of vulnerable people looking for help are a rehab marketer’s dream, about an addiction support group whose members included covert marketers … and one woman who got banned from the group for calling them out, leaving her without the support (for her troubled son) that led her to join.
Treating people this way when they have any kind of medical or mental health problem is flat-out predatory, and I believe patients should be aware that they might want to stay away. I would. (I won’t say “should stay away,” because that’s a personal choice. But I won’t stand for it happening in dark alleys.)
Go to a legitimate, above-board patient site like SmartPatients.com or PatientsLikeMe or Inspire.com. They’re free, too! But, update: on Twitter, user Anita Figueroas said “sites like [Inspire] limit our outreach (links to our website aren’t allowed).”
2. Incompetence at security – and burying the evidence
An especially bad case of skullduggery and self-interest happened last July, when Wall Street was rattling sabers at Facebook because FB had not been truthful to investors about the Cambridge Analytica election scandal: SEC Probes Why Facebook Didn’t Warn Sooner on Privacy Lapse (Wall Street Journal). (It’s one thing to mess with the public, but mess with Wall Street and s4!t gets serious, eh?)
Coincidentally, right when that happened, a thriving private FB #MeToo group of 15,000 sexual abuse survivors got hacked by trolls (see the Wired article How a Facebook group for sexual assault survivors became a tool for harassment), who proceeded to target certain members with vicious sexual images, privately or publicly within the group. When the admins reported it to FB, FB didn’t investigate – without warning, they ERASED THE WHOLE GROUP, destroying all the evidence – not to mention all the group’s past conversations, networks of contacts, etc.
The company has gone too far, to the point where it’s time to walk away.
3. Incompetence and haphazard management of hate speech issues
Clearly, after the scandals around the 2016 elections and the alt-right hate problems, Facebook needed to do something about all the fraudulent accounts and hate speech they were allowing. But rather than taking an approach that could have been costly – actually being careful about the rules – they went for cheap and sloppy, because “careful” ain’t cheap. The result has been so dishearteningly inept that it helped put the final nail in the coffin of my willingness to be there.
It’s summed up in two articles about how clumsily they’re handling censorship vs freedom of speech – a very delicate issue in these times – which they’re trying to manage by shipping disorganized rules, written ad hoc by scattered staff, to cheap call-center personnel in the form of PowerPoint slides!
- June 2017: Facebook’s Secret Censorship Rules Protect White Men From Hate Speech But Not Black Children (Yes, it literally says that; read it. An interesting contrast to the perception that Silicon Valley is reflexively left-wing.)
- Nov 2018, Rolling Stone: “Who will fix FB?”, including the story of a guy whose legitimate website got banned from FB as collateral damage during a sweep intended to erase frauds … it seems nobody checked whether the rules were working as intended! That is WICKED bad in a software company: blind, unthinking execution of rules written by someone, somewhere, carried out (the article suspects) by workers in low-priced overseas call centers. And nobody checking.
The decision to actually leave Facebook started in mid-December. (The idea had come up several times, but throughout 2018 the situation got worse and worse.) Then, right after Christmas this came out:
- 12/27/18, NYTimes: an employee leaked the 1,400-page rulebook FB’s censors are supposed to use: Inside Facebook’s Secret Rulebook for Global Political Speech
The leaker said FB “was exercising too much power, with too little oversight — and making too many mistakes.” Mistakes like that can cause harm; harm that happens entirely because the company is being reckless.
Beware of technology carelessly used in the pursuit of large-scale automated profits
A basic reason why business loves automation is that human intervention is costly. “It doesn’t scale,” as they say. (Specifically, to do more of it you have to hire and train more people, pay them benefits, etc. Silicon Valley likes things you can program into a system and sell to 100 people or six billion at the same cost.)
I love automation as much as anyone (it’s been my whole career), but there are limits: you have to check that the robots aren’t going insane. Especially in cases where harm can result. Like driverless cars. Or healthcare.
Some things truly require human judgment.
Other big tech companies are getting too big for their britches – and too irresponsible – e.g. Amazon wants to sell its “Rekognition” face recognition software to the TSA, even though (USA Today, July) it misidentified 28 members of Congress in an ACLU test. The software said those 28 faces matched a database of arrest photos!
Are you eager to be screened by that software at your next TSA checkpoint? Especially if you’re not Caucasian: “Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.” [ACLU]
Note: TSA hasn’t bought Rekognition yet, but USA Today says local law enforcement agencies already have. Do they have I.T. experts who can adjust and evaluate such new technology??
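Since the ACLU gives the raw numbers, here’s a quick back-of-the-envelope check of what they imply – a minimal sketch in Python, with my own assumptions layered on top: that all 535 members of Congress were scanned, and that “nearly 40 percent” means 11 of the 28 false matches.

```python
# Rough arithmetic on the ACLU's Rekognition test of Congress.
# Assumptions (mine, not the ACLU's): all 535 members were scanned,
# and "nearly 40 percent" of 28 false matches means 11 matches.

members = 535
false_matches = 28
poc_share = 0.20                          # people of color: ~20% of Congress
poc_false = round(0.39 * false_matches)   # ~11 false matches were people of color

poc_members = poc_share * members         # ~107 members of color
white_members = (1 - poc_share) * members # ~428 white members

poc_rate = poc_false / poc_members                        # ~10.3%
white_rate = (false_matches - poc_false) / white_members  # ~4.0%

print(f"False-match rate, members of color: {poc_rate:.1%}")
print(f"False-match rate, white members:    {white_rate:.1%}")
print(f"Disparity: {poc_rate / white_rate:.1f}x")         # ~2.6x
```

In other words, under those assumptions a member of color was roughly two and a half times as likely to be falsely matched to a mugshot as a white member – which is exactly the disparity the ACLU was flagging.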
You should have exactly this kind of worry about anyone who’s touting some amazing “AI” (artificial intelligence) as the next miracle. AI is powerful and beginning to do great things – but it must be monitored and checked for unintended harms, or the robots truly will do large-scale harm in our civilization.
Some of the investor-oriented tweets and posts I’ve seen don’t care a thing about whether the stuff is accurate – “Hey, it’s NEW! It’s gonna be great! Don’t miss out – buy some today!”
Not me – not unless a thinking human is doing a sanity check on whether it gives accurate answers.
And that’s exactly what’s missing in Facebook’s irresponsible management of group security, covert marketers, and censorship vs free speech vs hate speech.
It’s often said that with great power comes great responsibility. Actions like FB’s and Amazon’s go way too far, and the last straw for me was the increasingly clear picture that Facebook truly isn’t going to let the risk of harm to others slow them down.
That would be irresponsible in any walk of life; in criminal law it’s called negligence. In healthcare (where I try to lead) it especially crosses the line into “must not be tolerated” territory.
So, Facebook: as they say on Shark Tank: I’m out.
Additional reading:
- Ars Technica, March 2018: Facebook scraped call, text message data for years from Android phones
  - “The company also writes that it never sells the data and that users are in control of the data uploaded to Facebook. This ‘fact check’ contradicts several details Ars found in analysis of Facebook data downloads and testimony from users who provided the data.”
- USA Today, April 2018: How Facebook can have your data even if you’re not on Facebook (Did you know FB collects data – totally without permission – on people who never even signed up for FB??)
- Washington Post, Nov 2018: Something really is wrong on the Internet. We should be more worried.