Thursday, September 12, 2019

China's Biggest Propaganda Agency Buys Ads on Facebook and Twitter to Smear Protesters in Hong Kong


China’s largest state-run news agency, Xinhua News, is buying ads on Facebook and Twitter to smear protesters in Hong Kong, a new tactic being used to influence how the rest of the world perceives the pro-democracy demonstrators.
An estimated 1.7 million people in Hong Kong, roughly a quarter of its population, took to the streets on Sunday to denounce Beijing’s attempts to interfere in the semi-autonomous territory. But China has amassed soldiers across the border in Shenzhen and appears to be stepping up its propaganda efforts online through paid ads on Facebook and Twitter, as well as unpaid content on platforms like YouTube.
Xinhua News currently has five different Facebook ads that directly relate to the unrest in Hong Kong, and all of the ads started running on Sunday, August 18. One of the Facebook ads addresses Democratic House Speaker Nancy Pelosi directly, calling on her to “fly to Hong Kong to see what the true facts are.” Pelosi has been critical of the Chinese government’s suppression of the demonstrators and called Beijing’s actions “cowardly.”
The anti-Pelosi Facebook ad uses a viral video of an Australian traveler who was recently inconvenienced at Hong Kong International Airport. Protesters helped shut down the airport over the course of two days, demanding freedom and apologizing to travelers for disrupting their flights. The Australian traveler, who appears to have given an interview to Chinese state media, told pro-democracy demonstrators they should “get a job” and even said they should “know their place,” though the latter isn’t featured in the Facebook ad.
The Australian traveler also said that “Hong Kong is a part of China,” a contentious statement because Hong Kong currently operates under a “one country, two systems” arrangement. That arrangement allows Hong Kong to maintain its own laws and civil liberties until 2047, a year well within the lifetimes of many young protesters and one that has contributed to the region’s young-old divide. Some elderly Hongkongers have been among the most outspoken critics of the protests, something that becomes clear in the pro-Beijing Facebook ads.
Another Facebook ad from Xinhua claims that Hong Kong’s economy is suffering because of the protests and insists that the public wants someone to “restore order.” The ad shows pro-Beijing demonstrators calling for an end to the violence, heavily implying that it’s the protesters who have caused the most harm.
In reality, Hong Kong police have been responsible for much of the violence on the ground, shooting “nonlethal” rounds at point-blank range and regularly firing tear gas into crowds of nonviolent protesters. One woman recently lost an eye after being shot by police, leading some allies to wear a bandage over one eye as a sign of solidarity. Xinhua’s propaganda video has also been posted to YouTube, though it’s unclear if the propaganda agency is buying paid ads on that platform.
Another propaganda ad from Xinhua focuses on the economic situation in Hong Kong. The ad shows photos of empty shopping malls with a caption that, yet again, calls for “order” to be restored—an ominous declaration from an authoritarian government like China’s, which currently holds anywhere from 800,000 to 3 million Muslims in concentration camps.
Xinhua is also promoting Twitter posts that suggest the violence is being perpetrated by the protesters and claim, again, that “order should be restored.” There’s an obvious pattern to all of this: Beijing wants to control the narrative by insisting that “order” is more important than democratic rights. And it’s no wonder why, with so much money at stake.
Another Chinese state media outlet, CGTN, even posted an embarrassing anti-democracy rap video to Twitter over the weekend that ends with President Donald Trump saying Hong Kong is part of China.

Trump has previously been reluctant to criticize China over its anti-democratic crackdown and now Chinese propagandists are using his own words against the protesters.
The protests in Hong Kong have been raging for eleven weeks now, initially set off by an extradition bill that would have allowed Beijing to snatch so-called criminals from Hong Kong. The bill was especially alarming because Hong Kong is a haven for political dissidents and pro-democracy leaders in Asia. The extradition bill has been withdrawn by Hong Kong Chief Executive Carrie Lam, but the protesters want assurances that it won’t be reintroduced.
China’s Xinhua News has been on Facebook since 2012, despite the fact that Facebook is banned in mainland China. Twitter and YouTube are also banned, but the intended audience of these ads is clearly the international community. Protest organizers in Hong Kong have purchased their own ads in international newspapers, according to the Hong Kong Free Press, but it’s not clear whether they’re buying ads online as well.
The print ad, which appeared today in the New York Times and Canada’s Globe and Mail, among others, reads in part:
Amid tear gas and rubber bullets, this once vibrant and safe metropolis is at a crossroads. Since the protests against the controversial extradition bill started in June, Hong Kong’s autonomy and freedom have been eroded beyond recognition. This is the ugly truth that the Hong Kong government does not want you to know: Hong Kong is becoming a police state.
Instead of implementing political reform as promised, the Hong Kong government has turned into an apparatus of repression. Police brutality, endorsed by both the Hong Kong and Chinese governments, has now become part of our daily lives.
In the name of public order, the police dehumanize protesters as ‘cockroaches’ and deploy certain anti-riot measures prohibited by international standards. The police also batter passers-by, journalists and medical personnel. Police stations are shut whenever alleged thugs-for-hire indiscriminately attack protesters and ordinary citizens.
Arbitrary arrests and political prosecutions are becoming increasingly common. These are all tactics of the Hong Kong government to intimidate its own people into silence.
Bear witness to Hongkongers’ fight for freedom. Tell our story—especially if we can no longer do it ourselves. Fight For Freedom. Stand With Hong Kong.
It’s not clear how much money Facebook and Twitter are making from the Chinese propaganda ads and the tech giants did not respond to requests for comment this morning. We will update this article if we hear back.

Report: Facebook Content Mods Say Company Therapists Were Pressured to Share Session Details


Adding to an already ridiculously long list of complaints, Facebook’s content moderators now say a higher-up asked company-appointed counselors to share information from their sessions, according to a new report from the Intercept.
Numerous investigations have described these workers as notoriously underpaid and overworked, laboring in crappy conditions that require them to scan some of the most disturbing posts the internet can offer. You know, all the things it might behoove someone to see a therapist about.
This most recent criticism comes from a site in Austin, Texas, led by Accenture, an independent contractor Facebook hired to oversee 1,500 of its content moderators. Accenture and Facebook also employ trauma counselors, a.k.a. “wellness coaches,” to help staff cope after screening all that potentially graphic content to judge whether it violates the company’s terms of service.
But while both parties involved in these counseling sessions understood them to be private, a letter written by several whistleblowers claims that Accenture has made several attempts since July to review what was discussed. This letter, published by the Intercept with potentially identifying information redacted, reads in part:
It has come to our attention that an Accenture [manager] pressured a WeCare licensed counselor to divulge the contents of their session with an Accenture employee. The counselor refused, stating confidentiality concerns, but the [manager] pressed on by stating that because this was not a clinical setting, confidentiality did not exist. The counselor again refused. This pressuring of a licensed counselor to divulge confidential information is at best a careless breach of trust into the Wellness program and, at worst, an ethics and possible legal violation.
What exactly this Accenture manager wanted to know isn’t clear, the Intercept reported. The letter calls for the manager’s removal and claims at least one therapist resigned after being pressured to reveal information disclosed during one of these counseling sessions.
An outsourcing manager later told employees that Facebook had conducted an internal investigation into the matter and found “no violation or breach of trust between our licensed counselors and a contracted employee,” per the Intercept, though the incident did prompt the company to “refresh” the team’s “wellness coaches” on what they “can and can’t share.”
When Gizmodo reached out to Facebook about this report, a spokesperson reiterated the same company statement it provided the Intercept, which you can read below:
“All of our partners must provide a resiliency plan that is reviewed and approved by Facebook. This includes a holistic approach to wellbeing and resiliency that puts the needs of their employees first. All leaders and wellness coaches receive training on this employee resource and while we do not believe that there was a breach of privacy in this case, we have used this as an opportunity to reemphasize that training across the organization.”
Accenture also added the following in the report:
“These allegations are inaccurate. Our people’s wellbeing is our top priority and our trust-and-safety teams in Austin have unrestricted access to wellness support. Additionally, our wellness program offers proactive and on-demand counseling and is backed by a strong employee assistance program. Our people are actively encouraged to raise wellness concerns through these programs. We also review, benchmark and invest in our wellness programs on an ongoing basis to create the most supportive workplace environment – regularly seeking input from industry experts, medical professionals and our people.”
After coming under fire for other employee criticisms, Facebook announced in May that it would be improving pay and benefits for a portion of its content moderators. However, a recent Verge article covering a Tampa, Florida site (the same one where an employee purportedly died at his desk) still described a grim and chaotic workplace, indicating—along with this new Intercept report—that Facebook may still be ignoring the root of its moderator problem.

Instagram Boots Ad Partner HYP3R for Reportedly Scraping Huge Amounts of User Data


Instagram has banned one of parent company Facebook’s official marketing partners, San Francisco-based HYP3R, after “a combination of configuration errors and lax oversight” on Instagram’s part allowed HYP3R to scrape massive amounts of data on Instagram users, Business Insider reported on Wednesday.
HYP3R, which has raised tens of millions of dollars in funding, relies on tracking social-media posts tagged in real-world locations, then allowing its marketing clients to interact with the users who uploaded them (say, to address complaints about service) or use that data for targeted advertising purposes. But following the fallout of the Cambridge Analytica data-harvesting scandal at Facebook in early 2018, Instagram began disabling some parts of its API—including location tools. According to Business Insider, while HYP3R publicly supported the decision, it also created tools meant to continue scraping that data in ways that took advantage of Instagram’s sloppy implementation of the API rollbacks and sure look like violations of its terms of service.
Business Insider wrote that HYP3R took “advantage of an Instagram security lapse” that allowed users who were not logged in to view posts from public location pages. Using that access, the company created geofenced locations ranging from stadiums to hotels, harvested “every public post tagged with that location on Instagram,” and stored them indefinitely. It also built a tool to download Instagram Stories, which are supposed to auto-delete after 24 hours, from those locations and similarly store them forever. (In both cases, only users who set their accounts to public would be affected.)
This allowed HYP3R to “build up detailed profiles of huge numbers of people’s movements, their habits, and the businesses they frequent over time,” Business Insider wrote, with sources telling the site that Instagram accounted for over 90 percent of what HYP3R has advertised as a database of “hundreds of millions of the highest value consumers in the world.” But the practice also seemed to be in clear violation of Instagram terms of service forbidding storing content longer than “necessary to provide your app’s service,” as well as a ban on reverse-engineering Instagram’s APIs. Facebook also forbids automated data collection without express written permission. On Tuesday, Instagram sent HYP3R a cease-and-desist letter and banned it from its platform.
Business Insider noted that HYP3R never hid what it was doing, touting its API as allowing more access to data than through official Instagram tools and listing “support for Instagram Stories” in release notes for the iOS version of its app. But Facebook nonetheless included the company on a list of recommended, and supposedly vetted, marketing partners. Business Insider added it is “not clear” how Instagram failed to detect the mass data scraping, which seems to stretch the bounds of credulity given that HYP3R was openly carrying out the practices while holding recommended marketing partner status.
“HYP3R’s actions were not sanctioned and violate our policies,” a spokesperson for Instagram told CNBC in a statement. “As a result, we’ve removed them from our platform. We’ve also made a product change that should help prevent other companies from scraping public location pages in this way.”
HYP3R denies that it violated any Instagram policies.
“Hyp3r is, and has always been, a company that enables authentic, delightful marketing that is compliant with consumer privacy regulations and social network Terms of Services,” Hyp3r CEO Carlos Garcia told CNET. “We do not view any content or information that cannot be accessed publicly by everyone online.”

Even a Country Without Regular Internet Access Doesn't Trust Facebook


Perennially responsible company Facebook announced its intention to launch its own cryptocurrency, called Libra, less than two months ago. In that time a raft of governmental bodies have told Facebook in no uncertain terms to knock it the hell off, including:
  • The House Committee on Financial Services
  • The Senate Banking Committee
  • The Finance Minister of France
  • The Secretary of the U.S. Treasury
  • The Chair of the U.S. Federal Reserve
Credit where it’s due: Few things seem to unify our governmental bodies in these divided times quite like telling Facebook to take a flying leap. Perhaps sensing the call towards unity, an international group of privacy regulators today released a joint statement (and a “non-exhaustive list” of questions) expressing their “shared concerns about the privacy risks posed by the Libra digital currency.”
The statement, posted by the UK’s Information Commissioner’s Office, notes that “many of us in the regulatory community have had to address previous episodes where Facebook’s handling of people’s information has not met the expectations of regulators, or their own users.” Certainly, Zuckbucks—which is also backed by Visa, MasterCard, and PayPal—faces an uphill battle: it must not only satisfy these international IT and banking regulators’ concerns but also regain consumer trust, both in the often-scammy world of cryptocurrency and in Facebook itself.
Just how depleted is that trust? The signatories of the statement include the usual suspects: FTC Commissioner Rohit Chopra; Giovanni Buttarelli, who oversees data protection for the EU; and similarly titled officials from the UK, Canada, and Australia. And Marguerite Ouedraogo Bonane, the president of the Commission for Information Technology and Civil Liberties for Burkina Faso.
Wait—Burkina Faso? Teeny, tiny West African country Burkina Faso? Burkina Faso that, unless public education in the U.S. has dramatically improved since my time in high school, almost no U.S.-based readers of this article could identify on a map? Yes, the very same. This is not to dunk on Burkina Faso or any other small Francophone nation, for that matter. What’s incredible is that a country where only around 3 percent of its population has regular internet access knows enough about Facebook to trust absolutely nothing it touches.


Facebook Will Attach Its Name to Instagram and WhatsApp, for Some Reason


Incredible. Despite a seemingly endless wave of ongoing public relations crises, social media giant Facebook appears prepared to foist its baggage onto two of its considerably less troubled subsidiaries—WhatsApp and Instagram—by attaching its name to theirs.
Citing sources familiar with the matter, the Information reported Thursday that Facebook is looking to rebrand the two apps by renaming them “Instagram from Facebook” and “WhatsApp from Facebook,” a format it already uses for its collaboration tool Workplace. The rebrand will be visible when users sign into the applications and on app store displays, according to the report, and presumably elsewhere. What could go wrong?!
The move comes as the result of Mark Zuckerberg’s frustration that his social media site hasn’t been given more props for the success of the companies, according to the Information, which sounds about right given what we know about Facebook’s big boy CEO. In a statement, Facebook spokeswoman Bertie Thomson told the Information that the company wants “to be clearer about the products and services that are part of Facebook.”
This is, of course, an almost comically bad decision on the part of Facebook and I imagine also terrible news for its so-called “family of apps.”
Instagram and WhatsApp—perhaps the former more so than the latter—have largely managed to sidestep some of the associative fallout from Facebook’s colossal mountain of bullshit, be it related to antitrust investigations, privacy concerns, security issues, or some as-yet-to-be-uncovered blunder.
And sure, a good percentage of consumers certainly realize that these apps are owned by a company that has yet to prove it can handle even a modicum of its own power. But adding such an explicit link between Facebook and Instagram and WhatsApp certainly won’t be doing either any favors.
Man, and to think there was a time when influencer sponcon seemed like an Instagram death knell.

How Did This Egg Get 'Bigger Than Before'?


This 5-minute craft is perhaps among the best things on Facebook today. But, like many things on Facebook, it’s not quite right. So, it’s time we learn some elementary school science.
Chicken eggs, the container and food source inside which chicks develop, are encased in a shell of calcium carbonate, a common mineral found in rocks and throughout the natural world. Vinegar, on the other hand, contains acetic acid. When the two mix, the carbonate takes up hydrogen ions from the acetic acid and becomes carbonic acid, which breaks down into carbon dioxide and water.
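For the curious, the shell-dissolving chemistry described above can be written as two steps (the calcium acetate that forms simply stays dissolved in the vinegar):

```latex
% Acetic acid in the vinegar attacks the calcium carbonate shell:
\mathrm{CaCO_3} + 2\,\mathrm{CH_3COOH} \;\rightarrow\; \mathrm{Ca(CH_3COO)_2} + \mathrm{H_2CO_3}
% The unstable carbonic acid then decomposes into water and the gas you see bubbling off:
\mathrm{H_2CO_3} \;\rightarrow\; \mathrm{H_2O} + \mathrm{CO_2}
```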
If you actually perform this experiment, you’ll notice two things: first, it produces bubbles of carbon dioxide, and second, it might take longer than one day for all of the egg’s shell to undergo the reaction. I think when we did this experiment in elementary school we waited two or three days.
What’s left after the reaction is just a membrane with no shell. Otherwise, the egg itself hasn’t changed: puncture the membrane and you’ll be left with a puddle of raw egg. But the egg will indeed be “bigger than before,” as water from the vinegar passes through the membrane into the egg, which has a lower water concentration. It’s called osmosis.
The video then instructs you to put the egg into maple syrup, promising that after a day it will again be “bigger than before.” I assume whatever syrup they used was also high in water, which is why the egg got even bigger. Typically when you (or any other YouTube scientist) run this experiment, the syrup contains less water than the egg does, so water leaves the shell-less egg via osmosis and you’re left with a shriveled-up mass.

Facebook’s Broken Ad Archive Is Working as Intended


Facebook’s cash cow is its ad business, and in the unconstrained pursuit of making that business as valuable as possible, the company has been accused of allowing advertisers to explicitly target white nationalists, or exclude immigrants and people of color from seeing housing opportunities, or dodge older workers when posting job listings. There was also that whole Russia thing. So Facebook built an ad archive—first just for political spending, then for everything—and folks, it does exactly what it was designed to do.
Employees at Mozilla, government workers with the Office of the French Ambassador for Digital Affairs, and data journalists, the New York Times reported today, have found that, despite using what’s designed to be a more robust, non-public version of the ad library, Facebook’s Ad Archive does not meet even their barest needs, with the product failing in ways that would be humiliating for a company a fraction of Facebook’s size.
Per the Times, Facebook put on the charade of making the data available but created a process for extracting the information so difficult as to render it useless:
With each search limited to 2,000 results, the researchers needed to do 1,900 searches to collect all the data
[Mozilla researchers managed] to download the information they needed on only two days [worth of ads] in a six-week span because of bugs and technical issues
With the relatively lousy internet speeds in the U.S., six weeks is long enough to download the entire Library of Congress—about 23 times over.
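For a sense of scale, the two figures in the Times’ excerpt imply a corpus in the millions of ads; a quick back-of-the-envelope check (the 2,000-result cap and the 1,900-search count come from the report; the script itself is just illustrative):

```python
# Rough size of the archive implied by the Times' reporting:
# each search returns at most 2,000 results, and the researchers
# needed roughly 1,900 searches to sweep the whole data set.
RESULTS_PER_SEARCH = 2_000
SEARCHES_NEEDED = 1_900

total_ads = RESULTS_PER_SEARCH * SEARCHES_NEEDED
print(f"~{total_ads:,} ads")  # prints: ~3,800,000 ads
```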
These are not hard problems to solve for a firm with the engineering pool and brain trust of Facebook. But that presumes Facebook is in any way interested in making the Ad Archive functional.
Jason Chuang, a Mozilla researcher, engaged in a lengthy back-and-forth with Facebook about a bug that crashed a search after 59 pages of results. Weeks later, a Facebook representative sent a message saying, “This is unfortunately a won’t fix for now.” [...] as recently as this week, the researchers said the library still crashed when they tried to check if the bug was fixed.
But let’s give the benefit of the doubt to the big, awful company that’s lied to us repeatedly: Building an archive of your core business function can’t be easy, and all software has issues early on. At least, as issues arose, the researchers could file bug reports.
On two other occasions, the researchers said Facebook blocked them from reporting fresh bugs. The reason? They had already reported too many.
So it has some issues. And the Ad Archive isn’t exactly priority #1 at Facebook HQ. But when it returns results, researchers can trust that—
[Researchers] found that identical searches often returned different results
Oh.
So maybe its “searchability” was overstated, but at least it preserves this content and in doing so fulfills the basic function of an archive.
The French officials also found that Facebook sometimes removed ads without explanation. They said 31 percent of the ads in the French library were removed in the week before the European elections, including at least 11 that violated French electoral law.
Tools—functioning tools—to glean rich data about these ads already existed, built by Mozilla and ProPublica among others. Facebook made the intentional move to shut them out in January, claiming the code changes that locked out these tools were part of “a routine update and applied to ad blocking and ad scraping plug-ins, which can expose people’s information to bad actors in ways they did not expect.”

Here's Why Facebook's FTC Settlement Is a Joke


After some rumor-milling and informed reportage attributable to anonymous sources, Facebook has finally made public the contours of its deal with the Federal Trade Commission to end a probe into its handling of the Cambridge Analytica scandal. From what few specifics are shared by the company’s general counsel, Colin Stretch (who was supposed to have resigned last year), one thing is quite clear—Facebook got off easy.
This is not just a statement about the $5 billion fine Facebook agreed to pay, which is quite a lot of money for regular people, or even for fairly large businesses. However, $5 billion is approximately what Facebook makes in a single month. And as a number of high-level Democrats, including Senators Mark Warner and Ron Wyden, Representative David Cicilline, and FTC Commissioner Rohit Chopra, have pointed out, the settlement does little to hold Facebook accountable, despite Stretch’s assertion that the “accountability required by this agreement surpasses current U.S. law.”
Yes, the government won vastly increased oversight into Facebook’s day-to-day operations, and the company’s board will be buffered by “an independent privacy committee of Facebook’s board of directors,” according to the FTC, which itself “must be independent and will be appointed by an independent nominating committee.” Who makes up this committee, who performs these appointments, and which individual or group will act as Facebook’s new “independent privacy assessor” remain open questions.
As Bloomberg points out, however, none of these changes does much to alter Facebook’s core business model of hoovering up as much consumer data as possible. The enormous number of users on the platform and the richness of the data collected about them mean Facebook is still a tinderbox. The FTC is essentially telling the company that it’s okay to keep playing with matches, as long as it isn’t caught lighting any.
What Facebook would like you to believe is that this agreement establishes a buck-stops-here accountability resting with Mark Zuckerberg. During the 20 years the agreement is active, Stretch writes, “we will have quarterly certifications to verify that our privacy controls are working [...] the process stops at the desk of our CEO, who will sign his name to verify that we did what we said we would.” And as the FTC itself notes, this means Zuckerberg may have less wiggle room to avoid personal penalties down the line if future violations are discovered, though given the weak nature of this very settlement, it’s doubtful such a stipulation would draw any real blood from Zuckerberg’s pallid husk should that situation arise.
When asked if, consistent with Facebook’s stated values on transparency, these quarterly reports would be made public, a spokesperson declined to answer affirmatively or on the record, instead writing back that the company would begin providing updates in the coming months.
In what may be the most insulting paragraph of Stretch’s note, which Facebook published exactly when it knew news of former special counsel Robert Mueller’s testimony would drown out any other news item, he writes, “the agreement will require a fundamental shift in the way we approach our work [...] It will mark a sharper turn toward privacy, on a different scale than anything we’ve done in the past.”
I don’t know how Facebook approaches its work. What I do know is how it approaches its users: incrementally, and most often after being caught doing something untoward, it placates them with promises of fundamental changes in how it’s thinking about or implementing privacy; of how it’s empowering us, the consumers, to control our privacy; and of privacy, privacy, privacy. Why would we trust Zuckerberg’s sign-off on quarterly privacy assessments when he and his team have consistently published statements claiming Facebook will protect our privacy, claims that, in light of Cambridge Analytica, turned out to be broadly untrue?
Here are just a few examples from Facebook’s Newsroom page:
  • March 30, 2019, written by Mark Zuckerberg: “People around the world have called for comprehensive privacy regulation in line with the European Union’s General Data Protection Regulation, and I agree.”
  • May 1, 2018: “we’re sharing some of the first steps we’re taking to better protect people’s privacy [...] We’re starting with a feature that addresses feedback we’ve heard consistently from people who use Facebook, privacy advocates and regulators: everyone should have more information and control over the data Facebook receives from other websites and apps that use our services.”
  • April 17, 2018: “In recent weeks we’ve announced several steps to give people more control over their privacy and explain how we use data [...] We not only want to comply with the law, but also go beyond our obligations to build new and improved privacy experiences for everyone on Facebook.”
  • April 16, 2018: “As Mark said last week, we believe everyone deserves good privacy controls.”
  • April 4, 2018: “It’s important to show people in black and white how our products work – it’s one of the ways people can make informed decisions about their privacy.”
  • March 28, 2018: “we’re taking additional steps in the coming weeks to put people more in control of their privacy [...] We’ve worked with regulators, legislators and privacy experts on these tools and updates.”
  • November 27, 2017: “Protecting people’s privacy is central to how we’ve designed our ad system.”
  • May 22, 2014: “Over the next few weeks, we’ll start rolling out a new and expanded privacy checkup tool, which will take people through a few steps to review things like who they’re posting to, which apps they use, and the privacy of key pieces of information on their profile [...] Everything about how privacy works on Facebook remains the same.”
  • October 23, 2013: “On Facebook, you control who you share with [...] We take the safety of teens very seriously, so they will see an extra reminder before they can share publicly [...] they’ll see a reminder that the post can be seen by anyone, not just people they know, with an option to change the post’s privacy.”
  • January 28, 2013: “Last year, we launched improved privacy tools that let people see what they’ve shared, to see what photos have been tagged of them, and to be able to take action if there’s something they don’t like.”
  • December 21, 2012: “Along with the overall effort to continue bringing privacy controls up front, we’re adding in-context notices throughout Facebook.”
  • September 30, 2012: “We wanted to share some of the ways we have carefully designed our versions of the features with your privacy in mind”
  • November 29, 2011, written by Mark Zuckerberg: “With each new tool, we’ve added new privacy controls to ensure that you continue to have complete control over who sees everything you share [...] I’m committed to making Facebook the leader in transparency and control around privacy.”
  • May 26, 2010: “Facebook today responded to user comments and concerns about privacy by announcing it will introduce simpler and more powerful controls for sharing personal information [...] Starting with the changes announced today, the company will also prioritize ease-of-use in its privacy design.”
  • December 9, 2009: “many users have expressed that the current set of privacy choices are confusing or overwhelming. In response, the Privacy Settings page has been completely redesigned with a goal of making the controls easy, intuitive and accessible.”
  • August 27, 2009: “Facebook today announced plans to further improve people’s control over their information and enable them to make more informed choices about their privacy.”
  • October 16, 2007: “When Mark and his co-founders built the Facebook website in 2004, privacy was a core tenet. This was evident early on by the segmented structure of networks and extensive privacy options [...] Facebook will continue to develop sophisticated safety technology and offer users extensive privacy controls so they can make their information available only to the people they choose.”
  • September 26, 2006: “Facebook has launched additional privacy controls with this expansion that allow every user to: • Block other users in specific networks from searching for his or her name. • Prevent people in those networks from messaging, poking and adding him or her as a friend. • Control whether his or her profile picture shows up in search results.”
  • September 8, 2006: “Facebook, the Internet’s leading social utility, today announced additional controls for News Feed and Mini-Feed in response to user feedback and to reaffirm its commitment to industry-leading privacy practices.”
“We have heard that words and apologies are not enough and that we need to show action,” Stretch wrote today, tapping another beloved vein of Facebook’s mine of excuses:
  • April 29, 2019: “Over the past two years, we have made significant improvements in how we monitor for and take action against abuse on our platform.”
  • December 18, 2018: “We know that we need to do more: to listen, look deeper and take action to respect fundamental rights.”
  • November 15, 2018: “The fact that victims typically have to report this content before we can take action can be upsetting for them.”
  • October 26, 2018: “Our elections war room has teams from across the company, including from threat intelligence, data science, software engineering, research, community operations and legal. These groups helped quickly identify, investigate and evaluate the problem, and then take action to stop it.”
  • October 22, 2018: “We use reports from our community and technology like machine learning and artificial intelligence to detect bad behavior and take action more quickly.”
  • September 13, 2018: “This will help us identify and take action against more types of misinformation, faster.”
  • August 28, 2018: “We want to make it more difficult for people to manipulate our platform in Myanmar and will continue to investigate and take action on this behavior.”
  • July 25, 2018: “We use reports from our community and technology like machine learning and artificial intelligence to detect bad behavior and take action more quickly.”
  • May 23, 2018: “We also take action against entire Pages and websites that repeatedly share false news, reducing their overall News Feed distribution.”
  • December 19, 2017: “We review reports and take action on abuse, like removing content, disabling accounts, and limiting certain features like commenting for people who have violated our Community Standards.”
  • June 15, 2017: “Because we don’t want terrorists to have a place anywhere in the family of Facebook apps, we have begun work on systems to enable us to take action against terrorist accounts across all our platforms, including WhatsApp and Instagram”
  • And March 23, 2012, in an action/privacy two-for-one: “We’ll take action to protect the privacy and security of our users, whether by engaging policymakers or, where appropriate, by initiating legal action, including by shutting down applications that abuse their privileges.”
No company has ever been so profoundly full of shit as Facebook. While lawmakers are right to criticize this settlement as a slap on the wrist, ultimately no perfect deal could have been brokered: nothing short of the complete dissolution of Facebook would suffice to undo the deep breach of trust Zuckerberg has caused.

Report: FTC to Accuse Facebook of Using 2FA Numbers for Ads, Hiding Facial Recognition Settings


The Federal Trade Commission recently attracted scorn from congressional Democrats for fining world-spanning social network Facebook just $5 billion rather than tens of billions after the company failed to abide by the terms of a 2011 settlement with the agency on user privacy. But the FTC still has more dirt to dish out from its 16-month investigation into the company’s privacy practices, per a Tuesday report in the Washington Post.
Two sources “familiar with the matter” told the Post that the FTC is preparing to accuse Facebook of collecting phone numbers from users under the pretext of security and then allowing advertisers to use that information for ad targeting, as well as trying to hide settings allowing users to opt out of its facial-recognition database.
The first matter relates to a 2018 study from Northeastern University and Princeton University researchers, which found that when users gave Facebook their phone numbers to set up two-factor authentication, which helps prevent unauthorized access to accounts, Facebook then used the info to pad out its trove of data usable for advertising purposes. In the second, the Post wrote, the FTC plans to accuse Facebook of providing insufficient information to approximately 30 million users “about their ability to turn off a [facial-recognition] tool that would identify and offer tag suggestions for photos.” Earlier this year, Consumer Reports reported that some Facebook users had their ability to turn off the feature relegated to a seemingly unrelated “Tag Suggestions” setting.
Both of the allegations are slated to be announced on Wednesday, the Post’s sources said, and will be included in a complaint tied to the $5 billion settlement between the FTC and Facebook. As the Post noted, this is a strong hint that given the choice between penalizing Facebook further for the “litany of privacy scandals” that have emerged since the FTC began its investigation and giving it “a clean slate going forward,” the agency has chosen the latter.
That settlement is expected to be officially announced on Wednesday; three sources told the Post that it will not require Facebook to admit any wrongdoing, while two told the paper the FTC did not actually bother to question CEO Mark Zuckerberg directly over the course of the investigation. Other reporting from the New York Times has indicated that Facebook will agree to stricter oversight of how it collects user data in the settlement, but “none of the conditions in the settlement will impose strict limitations on Facebook’s ability to collect and share data with third parties.”

Two Cops Fired Over Facebook Post Suggesting Rep. Ocasio-Cortez Should Be Assassinated


Two police officers in Gretna, Louisiana, have been fired over a Facebook post suggesting Democratic Rep. Alexandria Ocasio-Cortez should be killed. One officer published the post and another liked it.
Charlie Rispoli, a 14-year veteran of the Gretna police force, shared an article on Facebook last week that was marked as satire and contained a fake quote attributed to Rep. Ocasio-Cortez about members of the U.S. military being paid too much.
Rispoli, apparently believing the article was real, wrote that Ocasio-Cortez was a “vile idiot” who “needs a round, and I don’t mean the kind she used to serve.” The congresswoman worked as a bartender before she was elected to Congress.
Angelo Variscom, another officer from the Gretna PD, reportedly liked the post, according to local news outlets, and both officers were fired on Monday for violating the police department’s social media policy. Gretna Police Chief Arthur Lawson called the officers’ actions an “embarrassment” at a press conference yesterday.
Ocasio-Cortez is frequently the target of far-right social media trolls and has been singled out by President Donald Trump during his neo-fascist rallies. Most recently, the president sat back while one of his crowds in North Carolina chanted “send her back” about another congresswoman, Ilhan Omar of Minnesota. Omar was born in Somalia but became a U.S. citizen as a child, and the chant was clearly racist. Representatives Omar and Ocasio-Cortez are both part of an informal group known as “The Squad,” which also includes congresswomen Rashida Tlaib and Ayanna Pressley.
“This is Trump’s goal when he uses targeted language & threatens elected officials who don’t agree w/ his political agenda,” Ocasio-Cortez tweeted on Monday. “It’s authoritarian behavior. The President is sowing violence. He’s creating an environment where people can get hurt & he claims plausible deniability.”

Ocasio-Cortez was also singled out in a Facebook group comprised of current and former members of U.S. Customs and Border Protection (CBP) that was recently revealed by the news outlet ProPublica. One of the Facebook posts in the group showed a photoshopped image of President Trump participating in a violent sexual assault of Ocasio-Cortez. Last week, the congresswoman asked Acting Homeland Security Secretary Kevin McAleenan about the posts in an open hearing.
“Those posts are unacceptable,” McAleenan said. “They are being investigated but I don’t think that it’s fair to apply them to the entire organization or that even the members of that group believed or supported those posts.”
But the image wasn’t out of character for the group, and the group wasn’t made up of just a small minority of CBP. According to a report from the Intercept, the Facebook group was so normalized within DHS culture that Border Patrol chief Carla Provost was even a member.
“Mr. Secretary, so you don’t think that having 10,000 officers in a violent, racist group sharing rape memes of members of Congress points to any concern of a dehumanized culture?” Ocasio-Cortez asked last Thursday.
McAleenan countered that the agency was “absolutely committed to the well-being of everyone that they interact with.”
But new reports that contradict McAleenan’s claims come out every single day. Just yesterday, NBC News published the story of a 17-year-old who described horrific conditions in one American-run concentration camp at the U.S.-Mexico border. The Guatemalan teenager described children going hungry, border agents taunting kids, and the lights being left on 24 hours a day.
“Sometimes, we would give one [hamburger] to the little ones. Because the little ones were the ones that wanted to eat more than others. At least, [the older kids could] stand the hunger a little more,” the teen said.
And on top of all that, American citizens are now being swept up into President Trump’s system of concentration camps. Dallas News published a report yesterday about an 18-year-old born in Dallas, Texas who was recently apprehended. The teen, Francisco Erwin Galicia, was detained at a CBP checkpoint on June 27, and now sits in an ICE facility despite the fact that his mother has provided his American birth certificate.
The Trump regime also published a new rule yesterday that will allow border agents to deport anyone who can’t prove that they’re an American citizen and have been in the U.S. for at least two years.
“The Trump admin’s new ‘expedited removal’ rule should terrify us all,” Congresswoman Diana DeGette of Colorado tweeted yesterday. “It will allow ICE officers to approach ANYBODY in the U.S. (w/o probable cause) & demand they prove they’re a citizen or have been in the US for 2 years. If they can’t, they’re deported. No trial, no hearing.”
The U.S. is now legitimately under proto-authoritarian rule and it’s going to get so much worse very quickly. And the rest of the world is going to see it play out on social media through the Twitter and Facebook posts of not just cops, but the President of the United States himself.