CLIPS: 17 March, 2022
Did someone forward you this email? Subscribe to receive our weekly newsletter directly in your inbox.
The Digital Fog of War: Meta Continues to Revise Content Moderation Policy in Ukraine
Last week, Reuters reported that internal Meta emails announced changes to the company’s content moderation policy, allowing users in some countries (including Russia, Ukraine, and Belarus) to call for violence against invading Russian soldiers. Meta walked the change back almost immediately after widespread criticism of inconsistency in its content moderation strategy, clarifying that its intention was to let Ukrainians express “their resistance and fury at the invading military forces.” Russia responded by opening a criminal case against the company and banning Instagram, shutting off access for 80 million users.
COVERAGE
Reuters, Meta narrows guidance to prohibit calls for death of a head of state
The Hill, Meta narrows content moderation policy, prohibits calls for death of head of state
The Guardian, Facebook and Instagram users not allowed to call for death of Putin
Protocol, Facebook: Users actually can't call for Putin assassination
CNN, Russia opens criminal case against Meta following temporary hate speech policy change
NPR, Russia plans to limit Instagram and could label Meta an extremist group
Newsweek, Meta Changes Stance on Violent Posts in Ukraine as Russia Bans Instagram
Democracy Now, Russia Shuts Down Instagram as Meta Allows Calls for Violence Against Russian Soldiers on Facebook
The Verge, Russia bans Instagram as promised, blocking access for 80 million users
Protocol, How war shaped Meta
RESPONSES
Kairos Action tweeted, “The fact that Meta gets to decide this without oversight is appalling. Zuckerberg and other leadership should not be the arbiters of what is violent speech.”
Casey Newton analyzed the events in Platformer. He says: “When users in one country are raining down bombs on your users in another, allowing the bombed users to say ‘die, invader scum’ can seem like the least you can do… But it raises questions that had no immediate answers. Questions like, how was this policy developed? What underlying principles will permit other violent threats to be made on Facebook and Instagram in the future?... In any case, the salience of those questions paled next to the median journalist shorthand for reporting this news, which was basically: ‘Facebook says murder actually good now, sometimes.’”
President of the NAACP’s Legal Defense Fund, Sherrilyn Ifill, tweeted, “I thought this was an Onion headline.”
CEO of Digital Content Next, Jason Kint, responded: “If you can’t keep up with labeling it or removing it then you may as well say you’re permitting it and score some political points. Yes?”
Director of Fight for the Future, Evan Greer, tweeted, “so many things to say about this. but one thing it exposes is that context is everything in content moderation, and often the bright line rules that seem logical at one moment suddenly don't make sense in a different context. this is something many miss or refuse to admit.”
In an op-ed for Tech Policy Press, Emerson T. Brooking (Senior Fellow at the DFR Lab) wrote: “If Meta had not changed its policy, it would be the job of Facebook and Instagram content moderation teams to remove any speech in which Ukrainians expressed fury against Russia or in which they celebrated the effectiveness of their own military in killing Russian invaders. Given the volume of such content, Meta would likely need to automate the task, using machine detection to identify, flag, and possibly remove Ukrainian speech that referenced the ongoing invasion.”
Editor of TPP Justin Hendrix tweeted, “It's important to remember that Facebook likely issues this kind of guidance in all parts of the world that are engaged in conflict. It must exercise editorial judgment in practically countless situations, even when its technological and human systems are overwhelmed.”
Ukraine's Defense Ministry Has Begun Using Clearview AI’s Facial Recognition Technology
Reuters reported this week that Ukraine’s defense ministry began using Clearview AI’s facial recognition technology last Saturday, after Clearview approached Ukraine offering its services to uncover Russian assailants, combat misinformation, and identify the dead. Clearview’s founder noted that the company holds more than 2 billion images from VKontakte (a Russian social media service) in a database of over 10 billion photos. While Clearview says it has not offered its technology to Russia, this sudden wartime expansion of facial recognition is concerning to many.
Facial recognition is a controversial, if not harmful, technology. Clearview previously garnered criticism, which we covered on February 17, when they announced to investors that they “plan to store every human’s face in their database and expand their work with private companies.”
COVERAGE
Reuters, Ukraine has started using Clearview AI’s facial recognition during war
Business Today, Ukraine uses facial recognition software during war to uncover Russian assailants: Report
CNBC, Ukraine has started using Clearview AI’s facial recognition during war
Forbes, The Vulnerability of AI Systems May Explain Why Russia Isn’t Using Them Extensively in Ukraine
Engadget, Ukraine is reportedly using Clearview AI's facial recognition tech
TechCrunch, Ukraine’s Mykhailo Fedorov talks about corporate sanctions and running a government during wartime
RESPONSES
The Electronic Frontier Foundation (EFF) tweeted, “Facial recognition company Clearview AI claims to be the vanguard of digital free speech. Its subpoenas targeting activists and organizations concerned about the company shows it’s the opposite: a speech-chilling bully.”
Fight for the Future (FFTF) tweeted, “Bad actors never waste a crisis.”
Evan Greer tweeted, “I’ll bet you five bucks that Clearview’s creepy CEO emailed some guy in Ukraine asking if they wanted to try out his facial recognition software and the guy replied something like ‘idk bro maybe we’re kind of at war rn’ and then Clearview’s PR team sent Reuters this press release.”
Stephanie Hare, researcher and writer on technological ethics, tweeted, “Technology is not neutral. Clearview AI has been fined and ordered to delete the data of inhabitants in many European countries — it takes people’s face data without their knowledge or consent. It is used widely in civilian contexts throughout the US. Now a weapon of war.”
Ryan Mac, tech reporter for The New York Times, tweeted, “Clearview AI, which never fails to insert itself in major news events, says its software is being used by Ukraine's defense ministry.”
He continued, tweeting, “It's possible Ukraine could be using Clearview, but I would hesitate at taking its CEO, who has a history of exaggerating (i.e. previously claiming the software is 100% accurate), at his word. It's also unclear *how* Ukraine is using it, even if there are potential use cases.”
Maderas, tech researcher and analyst, tweeted, “I understand why Ukraine would use Clearview AI's tech, but I'm suspicious of the company's motives. Clearview AI recently stated they're trying to get facial images of almost everyone on Earth into their facial recognition database; also, this doesn't align with their politics.”
Albert Cahn, founder of the Surveillance Technology Oversight Project, tweeted, “This is grotesque. As I told @JLDastinat @Reuters, this is going to be a deadly mistake. When police #FacialRecognition makes mistakes, we see innocent people wrongly arrested. When military face scans are wrong, civilians will get killed. #Ukraine.”
Silkie Carlo, director of Big Brother Watch, tweeted, “Clearview AI are at it again 1. If facial recognition fails (it will) people may be wrongly imprisoned or killed 2. The single source is Clearview. Without verification, @BBCNews should think hard about whether this report is news or free advertising.”
The State Privacy Legislation Battleground
While it remains uncertain whether Congress will act on new privacy legislation, state legislatures have become an important battleground for the issue. California, Virginia, and Colorado all passed comprehensive privacy laws within the last three years, and Washington, Iowa, and Utah are among the latest states considering new legislation. This week, Axios reported on criticism from Consumer Reports of the laws under consideration, citing industry-funded groups as the source of language that would undermine the private right of action (which lets users sue platforms) and default to an opt-out model for data collection. As states have begun implementing new privacy laws over the past several years, advocates of the fundamental right to privacy (especially EFF) have called for additional provisions to improve state bills’ efficacy, including a strong data minimization requirement, broader data privacy rights that cover targeted advertising, and non-discrimination in pricing against users who exercise privacy options.
COVERAGE
9to5Mac, Apple-backed lobbying group accused of pushing for weak privacy legislation
New York Times, How California Is Building the Nation’s First Privacy Police
Bloomberg, Utah Privacy Bill Lacking Right to Sue May Pave GOP States’ Way
Lexology, Round 4! Utah to become fourth state to pass comprehensive U.S. privacy legislation
JD Supra, Utah To Become The Fourth State to Pass Privacy Legislation
RESPONSES
EFF tweeted, “When it comes to state data ‘privacy’ laws, the race to the bottom is accelerating.”
EFF has in the past opposed state privacy laws in California and Virginia, criticizing the opt-out model of data collection and calling for stronger enforcement provisions.
Last week, EFF also joined a coalition of advocacy organizations in sending a pair of letters to Utah’s and Iowa’s state houses, opposing a lack of meaningful protections for consumer data in pending privacy bills.
Signatories included Consumer Reports, the Electronic Privacy Information Center (EPIC), Fight for the Future, U.S. PIRG, and Ranking Digital Rights.
FFTF tweeted, “In true corporate fashion, now that regulation on Big Tech is inevitable, companies are swooping in to gut proposed state legislation, and in some cases write the bills itself to pass along to lawmakers.”
Accountable Tech tweeted, “Key point from @mxmahoney5 on Big Tech's influence in state houses: ‘What we’re seeing is this race to the bottom where industry is having a major influence on these bills.’”
NetChoice hosted a podcast with the Regulatory Transparency Project on the topic of state privacy bills, entitled “After California and Virginia, What’s Next?: Examining the State of State Data Privacy Legislation in 2022.”
NetChoice went on to tweet: “A uniform federal data-privacy law succeeds where the proposed antitrust changes and our current state privacy model fail. It would actually address what Americans care about online and increase opportunities for competition.”
The ACLU tweeted, “Big Tech has begun seeding watered-down ‘privacy’ legislation in states with the goal of preempting greater protections, experts say.”
In a Twitter thread, head of tech policy with Consumer Reports Advocacy, Justin Brookman, tweeted, “Anything remotely stronger than the VA model attracts furious industry opposition. WA was considering an opt-out based bill with slightly stronger definitions, a global opt-out, and a bar on discriminating against consumers who exercise rights. Tech opposed. It died.”
Democrats Introduce The “Prohibiting Anticompetitive Mergers Act”
Sen. Elizabeth Warren (D-MA) and Rep. Mondaire Jones (D-NY) introduced the Prohibiting Anticompetitive Mergers Act on Wednesday. An amendment to the Clayton Act, the bill would strengthen antitrust enforcement by allowing the FTC and DOJ to reject deals without court orders, explicitly prohibiting deals valued over $5 billion or likely to produce a post-acquisition market share greater than 50%, and retroactively breaking up deals that resulted in sizable market shares. The Senate is also considering the American Innovation and Choice Online Act, an antitrust bill introduced by Sens. Klobuchar (D-MN) and Grassley (R-IA) that augments existing antitrust laws with explicit prohibitions on self-preferencing. The announcement comes as the FTC reviews Microsoft’s acquisition of Activision, and as workers at Activision raise concerns that the merger could undermine their campaigns against union busting and wage suppression.
COVERAGE
The Hill, Democrats introduce bill to give FTC, DOJ power to block, break up mergers
Bloomberg Law, Warren Introduces Bill to Bar Mergers Worth $5 Billion Or More
RESPONSES
In a thread, Alex Harman, competition policy advocate for Public Citizen, tweeted, “This bill is a huge step towards a progressive antimonopoly vision of how we should look at mergers. Stop treating mergers as a social good! They are not. Companies do not have a fundamental right to merge, and it should be on them to prove that a merger is truly necessary.”
The Athena Coalition tweeted, “Led by @ewarren and @MondaireJones, w support from @SenSanders and @AOC, this bill could: Rein in Amazon’s unchecked power, Explicitly consider impact of mergers on communities of color, Protect workers from lower pay or worse conditions as result of mergers”
In a statement, Demand Progress said, “This piece of legislation is an important and essential step toward repairing the problems that large monopolies have created in our economy.”
Communications Director Maria Langholz tweeted, “Today, two faves @SenWarren & @RepMondaire, introduced the "Prohibiting Anticompetitive Mergers Act." If passed, this would be a huge step toward reining in the monopolistic behaviors of massive corps like @amazon & @Meta”
The American Economic Liberties Project tweeted, “In @washingtonpost, @sarahmillerdc makes it clear that corporate mergers trigger layoffs, threaten working people, and harms the entire economy. Thrilled to see @SenWarren & @RepMondaire taking a stand against these mergers w/ their new merger bill.”
Executive Director Sarah Miller tweeted, “The ongoing, record-shattering merger frenzy is supercharging the concentration of wealth and power. @SenWarren and @RepMondaire's new Prohibiting Anticompetitive Mergers Act takes direct aim at addressing the harms M&As pose to workers and consumers.”
Code CWA tweeted, “Activision Blizzard employees are facing rampant surveillance, intimidation & union-busting tactics in response to their efforts to change a culture of worker abuse & discrimination. @SenWarren's bill would ensure the merger’s impact on these workers is prioritized..”
In a statement, Stacey Mitchell, co-director at the Institute for Local Self-Reliance, said, “The outsized power of a few corporations – gained in no small part through anti-competitive mergers – threatens our democracy and has rendered our economy increasingly unequal.”
Destroy Your Algorithms: FTC Explores Innovative Enforcement for Data Privacy
Last week, the FTC settled a lawsuit against WW International (formerly Weight Watchers) for collecting information from children without parental consent. The case is novel because, under the settlement terms, WW agreed to destroy not only the data it collected in violation of the Children’s Online Privacy Protection Act (COPPA) but also the algorithms derived from it. Algorithmic disgorgement, as FTC officials have described the practice in scholarly publications, prevents companies from profiting off of unlawfully collected data. The case may signal that the FTC will use algorithmic disgorgement more frequently as an enforcement tool. Privacy advocates have reacted positively to the WW settlement and to the prospect that it becomes a trend.
COVERAGE
Protocol, The FTC’s new enforcement weapon spells death for algorithms
Digiday, Why the FTC is forcing tech firms to kill their algorithms along with ill-gotten data
IAPP, FTC’s use of algorithmic destruction in enforcement expected to grow
Protocol, The eve of 'algorithmic destruction'?
JD Supra, FTC COPPA Settlement Requires Deletion of Algorithms
Mondaq, United States: FTC Settles With Weight Loss Company Over Children's Data
New York Times, Weight Watchers App Gathered Data From Children, F.T.C. Says
RESPONSES
Kate Kaye, senior tech reporter at Protocol, tweeted, “For the third time, @FTC has forced a company to destroy algorithms built with data gathered deceptively.”
Kairos Action tweeted, “Good news that the @FTC has forced a company to destroy algorithms built with data gathered deceptively. ‘Algorithmic destruction’ will penalize companies and ideally help protect our privacy.”
Calli Karter, global privacy counsel at EPIC, tweeted, “Forcing companies to delete abusive algorithms goes further than just addressing the data and I LOVE IT.”
Edward Ongweso Jr, staff writer at VICE, tweeted, “GOOD: FTC has been forcing firms to delete their illegally obtained datasets, as well as algorithms trained with them. BAD: black box algorithms are finding increased use in the so-called gig economy as part of a never ending bid to cut labor costs (driver pay, conditions, etc).”
Ben Williamson, Chancellor’s Fellow at the University of Edinburgh’s Centre for Research in Digital Education, tweeted, “Big policy move from the FTC related to enforcing children's data privacy - forcing companies to "delete algorithmic systems built with ill-gotten data" because they constitute the intellectual property companies generate value from.”
OPEN TABS
Opinion | States Are Right to Rebel Against Big Tech (New York Times)
The Sex-Ad Law FOSTA Was a Mistake. Some Lawmakers Want to Fix It. (Reason)
Instagram’s promised parental controls arrive in the US (The Verge)
Inside Apple’s Decision to Blow Up the Digital Ads Business (The Information)
Tech Executives Threatened With Jail Time Under U.K. Law (The Information)
Farmers Unions, Right To Repair Coalition Files FTC Complaint Against John Deere (Techdirt)