FTC Signals Action Against Surveillance of Children & More
FTC Issues Policy Statement Limiting The Collection Of Children’s Data By EdTech Companies
During an Open Meeting on Thursday, the FTC voted unanimously to issue a policy statement enforcing federal limits on the collection and use of children’s data by education technology providers. The statement clarifies rules set forth in the Children's Online Privacy Protection Act (COPPA), which governs data collection from internet users under 13 years of age, and explains that the burden of “COPPA compliance is on [EdTech] businesses, not schools or parents.” It also introduces a set of restrictions that prohibit EdTech providers from requiring schools or parents to consent to data collection as a condition of access, restrict how providers can use the data, and limit how long they can retain it. The FTC vote comes more than two years after the Covid-19 pandemic first forced schools across the country to dramatically increase their reliance on education technologies, including video calls and test-monitoring software.
In recent years, privacy advocates have criticized the surveillance-driven data collection model employed by some EdTech companies, such as Proctorio and Zoom (which we covered back in April). Privacy advocates applauded the FTC’s action, saying it makes meaningful progress toward protecting children’s data and privacy online. However, Karl Bode of Techdirt argues that the FTC has become a catch-all for priorities that can't advance elsewhere, and lacks the resources and authority to impose fines large enough to deter surveillance. Amid a lack of federal direction on privacy legislation, many states (including California and Colorado) have enacted their own privacy laws.
RESPONSES
FTC Chair Lina Khan said in a statement: “The ability of businesses to monetize user information has created a vast ecosystem of companies whose business model incentivizes the vast tracking and collection of personal data… Today’s statement underscores how the substantive protections of the COPPA rule ensure that children can do their schoolwork without having to surrender to commercial surveillance practices.”
In a statement, the American Economic Liberties Project said: “With today’s vote, the FTC unanimously decided to protect the privacy and security of children and families across the country as more schools embrace digital learning and online education.”
They went on to tweet, “From @Google collecting elementary students’ biometric data in Illinois to @Amazon recording the voices of millions of children, there is a dire need for a full throated enforcement of the Children’s Online Privacy Protection Act. Thank you, @FTC!”
The Center for Democracy and Technology said in a statement: “Limitations on data collection, use, and retention are essential to protect individuals from privacy harms and cybersecurity risks. We applaud the FTC for its work to strengthen enforcement of children’s privacy requirements in the context of education technology, and particularly thank the Commissioners who championed data minimization as a vital component of this work.”
Senior counsel Cody Venzke live-tweeted the FTC Open Meeting, adding: “The line between a ‘rule’ and a ‘policy statement’ is a fine one, but for the administrative law nerds, the significance is that a rule IS legally binding, while policy statements are not. That said, an agency must follow its own policy statement, procedures, and guidance.”
In a statement, Accountable Tech wrote: “By prioritizing enforcement of COPPA’s broad prohibition against conditioning a child’s participation in any activity on data collection, along with strict data use and retention prohibitions – and putting remedies like algorithmic disgorgement on the table – the FTC is finally threatening real consequences for those in the business of surveilling children. This is no replacement for Congress passing meaningful legislation to safeguard the best interests of kids and teens in the digital world, but it is a welcome step forward.”
They went on to tweet a quote from co-founder Jesse Lehrich: “While we desperately need new laws to protect young people online and curtail surveillance capitalism, it’s encouraging to see this FTC unleashing the full force of its existing authority to forge progress.”
Fight for the Future tweeted, “This is good. We've been sounding the alarm about eproctoring and the harms of surveillance-driven edutech for months.”
Academic Zephyr Teachout tweeted, “The FTC bringing down the hammer on ed tech demanding kids' data.”
Despite Diverse Opposition, States Pursue Efforts To Regulate Social Media Content
Writing for The Hill, journalist Rebecca Klar noted the “strange bedfellows” coalition of tech industry groups and digital rights organizations pushing back on content moderation bills in Texas and Florida. In recent weeks, these groups have released dozens of statements opposing the bills, arguing they would only facilitate hate speech and the spread of disinformation online. Last week, for example, a group of more than 20 industry and advocacy groups (including the Chamber of Progress, Anti-Defamation League, and NAACP) sent a brief to the Supreme Court, saying the Texas law could flood web services and private communities with irrelevant content. In another example, a coalition of digital rights groups (including CDT, EFF, and the R Street Institute) sent a similar brief to the Supreme Court, warning that the Texas bill could result in functionally zero moderation on some platforms and greatly increase hate speech on social media. Despite these efforts, state legislatures continue to consider such bills. Here are the most recent updates:
A federal appeals court voted Monday to uphold an injunction against Florida’s social media law (SB 7072), calling the bill, which would forbid social media platforms from deplatforming or moderating posts by political candidates, an unconstitutional breach of First Amendment rights. In the opinion, Circuit Judge Kevin Newsom said: “Put simply, with minor exceptions, the government can’t tell a private person or entity what to say or how to say it.” This stands in contrast to a recent ruling on Texas’ social media law (HB 20), where an appeals court lifted a preliminary injunction and allowed the bill, which prohibits platforms from restricting content based on “viewpoint,” to take effect.
Relatedly, on Tuesday a judge allowed Ohio Attorney General Dave Yost’s lawsuit against Google (alleging the platform is legally a “common carrier,” i.e., provides a public utility akin to phone companies) to proceed; the suit seeks to require Google to stop preferencing its own content and to “serve all comers and treat them equally.” It’s the first time a common carriage case against a tech platform has been allowed to proceed, and it could set a precedent for conservative politicians making similar cases in the future. These recent state efforts to regulate the content moderation practices of major social media platforms raise the question: why should social media platforms be forced to carry certain types of speech that violate their community guidelines?
RESPONSES
Public Knowledge released a statement on the FL injunction, saying: “Free speech requires content moderation. We know from experience that unmoderated social media platforms quickly can turn into cesspools of spam and hate, and that trolls can drive ordinary users off of platforms. Platforms are far from infallible, and must be held to account. But they must remain free to use their editorial discretion to choose what speech to host and promote, and as the court found today, the Constitution requires this, as well.”
The Computer & Communications Industry Association wrote in a statement, “This ruling means platforms cannot be forced by the government to disseminate vile, abusive and extremist content under penalty of law. This is good news for internet users, the First Amendment and free speech in a democracy. When a digital service takes action against problematic content on its own site — whether extremism, Russian propaganda, or racism and abuse — it is exercising its own right to free expression.”
Director of FFTF, Evan Greer, tweeted, “I don't always cheer when courts rule in favor of tech companies, but when i do it's because Florida's ‘social media law’ was childishly absurd, incompatible with the First Amendment, and a threat to human rights and free expression online.”
Director of Columbia’s Knight First Amendment Institute, Jameel Jaffer, wrote in a Twitter thread: “Florida was arguing that its law doesn’t implicate the First Amendment at all, because (it said) the platforms don’t engage in protected expression when they moderate or curate user content. The platforms, by contrast, were arguing that the First Amendment means, essentially, that they can’t be regulated at all. The Eleventh Circuit rejected both of these theories. Which is good, because neither would serve our society very well.”
Writing in Platformer, Casey Newton compared the negative impacts both social media bills would have on free speech, adding: “[I]t’s worth taking a moment to note just how quickly fringe ideas about platforms and speech have moved into the mainstream. During the net neutrality debate, as the Washington Post points out, Republicans rushed to defend the right of internet service providers to throttle or zero-rate anything they liked. The hypocrisy is galling. I’m hopeful that the Supreme Court will put the brakes on the Texas and Florida legislatures.”
In Techdirt, Mike Masnick wrote: “For the most part, this is a fantastic ruling, explaining clearly why content moderation is protected by the 1st Amendment. And, because I know that some supporters of Florida in our comments kept insisting that the lower court decision was only because it was a “liberal activist” judge, I’ll note that this ruling was written by Judge Kevin Newsom, who was appointed to the court by Donald Trump…”
In a Twitter thread, CDT wrote on the FL social media law: “Social media platforms should be able to remove hate speech, #disinformation, & other content that violates their rules, even if it’s posted by politicians or ‘journalistic enterprises’...[, in order] to respond to new events like #elections or public health crises… Platforms’ #contentmoderation is far from perfect, but laws that require them to publish content that violates their rules is not an improvement.”
They also highlighted their brief on the TX bill, with CEO Alexandra Reeve Givens adding in a Twitter thread: “[T]he law’s reckless approach is harmful for users and the public. Platforms would have to end content moderation practices that often benefit digital users for fear of lawsuits over whether their actions violate ‘viewpoint neutrality’... Allowing this law to take effect would unwind years of work by advocates around the world to get social media companies to take hate, harassment, and disinformation seriously.”
Writing in Tech Policy Press, Emma Llanso (director of CDT’s Free Expression Project) criticized HB 20, saying: “There’s no one-size-fits-all approach to content moderation that will work for every service, and getting closer to good outcomes requires a lot of thoughtful, nuanced work. Unfortunately, the issue of content moderation has also become highly politicized in the US, with lawmakers entrenched in partisan positions on how online services should or should not moderate user generated content.”
EFF wrote in a statement on the TX bill’s injunction being lifted: “[T]he Fifth Circuit’s ruling is wrong because what it defines as censorship are well-established practices designed to serve users’ interests. Users are best served when they can choose among social media platforms with different editorial policies. While content moderation at scale is difficult to get right, it blocks content that some users don’t want to see, like personal abuse and harassment, hateful speech, promotion of suicide and self-harm, and glorification of Nazi ideology.”
The Woodhull Freedom Foundation wrote in a statement: “Following the passage of FOSTA in 2018, [we] filed a challenge to the law — a battle that we continue to fight as sex workers and sexual communities are blocked, banned, de-platformed, and otherwise censored online. Yet here we are asking the Court to recognize that rather than offering freedom from censorship, HB20 is an unprecedented attack on our freedom to exist and express ourselves safely online.”
D.C. Attorney General Sues Mark Zuckerberg Over Cambridge Analytica Data Breach
Washington, D.C. Attorney General Karl Racine filed a lawsuit against Meta (Facebook) CEO Mark Zuckerberg on Monday, accusing him of being personally responsible for the massive Cambridge Analytica data breach. According to the Attorney General, the suit grew out of an existing investigation into the aftermath of the explosive Cambridge Analytica scandal. In 2018, a whistleblower revealed that Cambridge Analytica, a political consulting firm working for the Trump campaign, had improperly obtained information scraped from Facebook profiles to create targeted political ads based on personality traits. Following an investigation, the FTC fined Facebook $5 billion in 2019 – the largest fine ever levied by the FTC – for its role in the scandal. It’s not clear from the reporting why the D.C. AG’s office waited so long to file suit, given that the FTC completed its investigation several years ago. However, Racine holds that Zuckerberg is responsible because he controls the largest share of the company’s stock, has final say over its decisions, and presided over the practices that led to the scandal.
RESPONSES
Accountable Tech tweeted, “Mark Zuckerberg is the founder, CEO, chairman, and majority shareholder of Facebook. He must be held accountable for his company's failures and harms to humanity. #MakeMarkListen”
Sleeping Giants tweeted, “Amazing that @meta’s defense of holding Zuckerberg accountable for Cambridge Analytica is that it happened a few years ago.”
AELP tweeted, “4/ Mr. Zuckerberg cannot hide behind the veil of Meta, and @AGKarlRacine is rightly using the D.C. Consumer Protection Procedure Act to hold Mr. Zuckerberg to account. Attorneys general across the United States should follow his lead.”
They also released a statement, saying, “Enforcers must hold corporations and powerful individuals accountable when they violate the law.”
First Major Union In The Gaming Industry Amidst Another Potentially Massive Merger
Vice broke the story of the first major video game union to be formed in the United States. Raven Software’s QA team, a quality assurance group within the roughly 350-person Activision Blizzard studio, has worked on popular games such as Call of Duty, helping to identify glitches and bugs. The workers argue that they’re underpaid, understaffed, forced to work extra hours during crunch time (before a game’s patch or release), and have faced an atmosphere of discrimination. Activision Blizzard released a statement, saying, “We believe that an important decision that will impact the entire Raven Software studio of roughly 350 people should not be made by 19 Raven employees,” referring to the 19-3 vote in favor of joining the Game Workers Alliance. As big tech companies continue to expand by acquiring studios whose workers are organizing, it remains an open question whether unionization will spread through these companies and the industry as a whole.
A number of acquisitions in the gaming industry are in progress. Microsoft is expected to close its $68 billion acquisition of Activision Blizzard in 2023. There are also reports that Apple, Amazon, and Disney are each interested in acquiring Electronic Arts. Should a deal go through, it would represent another colossal video game merger that could shake up competition in the gaming industry. While EA has declined to comment, several sources assert that it is pursuing a sale.
RESPONSES
Patrick Klepek, reporter for Waypoint (a subsidiary of Vice), tweeted, “19 votes in favor, 3 votes against— Raven Software's QA team has a union. The video game industry has its first major video game union, a huge win for organized labor in an industry prone to crunch and exploitation.”
Evan Greer retweeted this.
Klepek continued, tweeting, “Should be clear, there are unions at major game studios internationally—this is new and unique to the U.S.”
Much Ado About Ads: Senator Mike Lee Unveils New Digital Advertising Bill
Led by Sen. Mike Lee (R-UT) with support from Sens. Amy Klobuchar (D-MN) and Richard Blumenthal (D-CT), a bipartisan group of lawmakers introduced the Competition and Transparency in Digital Advertising Act (CTDA), legislation that would prohibit companies earning more than $20 billion in annual digital ad revenue from owning multiple parts of the digital advertising marketplace. Ad marketplaces comprise three distinct segments – ad buying (e.g., advertisers who want to place their ads), selling (e.g., publishers and websites with ad space to sell), and exchanges (e.g., electronic auctions for trading ad space) – whose participants, often through intermediaries, buy, sell, and auction display ad space in real time. Under the proposed legislation, a company like Google, the leading player in the ad market, which owns the largest platforms in all three segments while simultaneously participating in them, would be forced to divest its ownership interests across the digital advertising chain. Other dominant players like Meta and Amazon would likely have to sell off portions of their digital advertising businesses as well. Rep. Ken Buck (R-CO) unveiled a companion bill, H.R. 7839, in the House.
This is not the first time Google and Facebook have been targeted by officials for their duopolistic hold over the digital ad sector. In July 2020, the CEOs of Amazon, Apple, Google, and Facebook appeared before the House Judiciary subcommittee on antitrust to answer questions about their business practices. Google CEO Sundar Pichai faced pointed questions from Rep. Jayapal (D-WA), quoted in a Wired article, that illustrate the concerns about big tech’s control of the buyer, seller, and exchange sides of their advertising markets:
“The problem is that Google controls all of these entities,” she said. “So it’s running the marketplace, it’s acting on the buy side, and it’s acting on the sell side at the same time, which is a major conflict of interest. It allows you to set rates very low as a buyer of ad space for newspapers, depriving them of their ad revenue, and then also to sell high to small businesses who are very dependent on advertising on your platform. It sounds a bit like the stock market. Except, unlike the stock market, there’s no regulation on your ad exchange market.”
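To make the three-sided structure concrete, here is a minimal Python sketch of a second-price auction, the clearing design many ad exchanges use. The names (`Bid`, `run_second_price_auction`) and dollar figures are illustrative, not any real exchange's API:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    buyer: str      # ad buyer (advertiser, or an intermediary acting for one)
    amount: float   # bid in dollars for a single ad impression

def run_second_price_auction(bids, floor=0.0):
    """Toy exchange clearing: the highest bidder wins the impression
    but pays the second-highest bid (or the publisher's price floor)."""
    eligible = sorted((b for b in bids if b.amount >= floor),
                      key=lambda b: b.amount, reverse=True)
    if not eligible:
        return None, 0.0
    winner = eligible[0]
    price = eligible[1].amount if len(eligible) > 1 else floor
    return winner, price

# Example: three buyers compete for one impression on a publisher's page.
bids = [Bid("buyer_a", 2.50), Bid("buyer_b", 3.10), Bid("buyer_c", 1.75)]
winner, price = run_second_price_auction(bids, floor=1.00)
# buyer_b wins and pays buyer_a's bid of $2.50
```

The conflict of interest Rep. Jayapal describes arises when one firm runs the auction while also bidding on the buy side and representing the sell side: unlike arm's-length participants, it can see every `amount` before the impression clears.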
In January, we covered allegations from a coalition of state attorneys general claiming that Facebook and Google engaged in illegal anticompetitive behavior, colluding to manipulate online advertising auctions in ways that suppressed payouts to publishers and led to price hikes for ad buyers.
As calls for antitrust regulation intensify, anti-monopoly groups like the American Economic Liberties Project support the momentum for antitrust legislation, while free-market critics warn that these bills would decrease consumer choice. Mirroring the self-preferencing bills currently under consideration, the CTDA would crack down on digital ad chain monopolies, blocking tech firms from prioritizing their own products and services and steering the market in their own interests. Scholars, including antitrust expert Dina Srinivasan, have written extensively about competition problems in ad marketplaces.
RESPONSES
Matt Stoller, director of Research at AELP, said in a statement, “By breaking up Google’s third-party ad tech business and requiring advertising intermediaries to disclose the prices and quality of the advertising they are buying and selling, this bill will finally allow honest price discovery in markets for online advertising.”
Stoller also tweeted, “This is an excellent bill by @SenMikeLee to address adtech consolidation. It would break apart part of Google's ad business, and have implications for Amazon's ad business as well. @SenAmyKlobuchar @RepKenBuck @SenBlumenthal are on board as well.”
Ranking Digital Rights tweeted, “This year, Google came under scrutiny for its ‘duopoly’ with @Meta over online advertising. Civil society and shareholders pressed the company to address its use of cookies to track users, ultimately pushing it to scrap its new ad targeting system.”
In a statement, the Software & Information Industry Association (SIIA) said, “This bill continues a trend seen in other recently proposed legislation, where legislators seek to use the blunt instrument of antitrust law to punish a handful of large corporations, focusing only on a company’s size, not its conduct.”
Dave Lauer, fair market advocate at Urvin AI, tweeted, “I believe strongly in identifying and breaking up concentrated corporate power. I believe strongly that open competition leads to the best outcomes for everyone (except for monopolies/oligopolies). Those arguing against competition are always the ones extracting rents.”
In a follow-up, Lauer tweeted, “Regardless, this is the right bill for the right time. It's got bipartisan support, and rightly so. Google and Facebook operate monopolies with no transparency or disclosure, and nearly complete control over buying, selling and matching. It's time to start to break them up.”
FTC Fines Twitter $150 Million For Using User Security Data For Targeted Ads Without Consent
Twitter will pay $150 million and can no longer profit from deceptively using account security data – phone numbers and email addresses – for targeted advertising purposes under a new settlement with the Federal Trade Commission. In a statement on Wednesday, the FTC announced that the fine results from Twitter violating a prior 2011 settlement with the FTC, known as a consent order, that banned the company from misleading consumers about its privacy and security practices. The FTC alleges that between 2014 and 2019, Twitter prompted users to provide phone numbers and email addresses to help secure their accounts (for account verification and multi-factor authentication), but failed to inform them that the information would also be used for targeted advertising.
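The violation at the heart of the settlement is a failure of purpose limitation: data collected for one stated purpose (account security) was quietly reused for another (ad targeting). Here is a minimal Python sketch of what a purpose-limitation check looks like; the class and field names are illustrative, not drawn from any real system:

```python
class PurposeLimitedStore:
    """Toy data store that records the purposes a user consented to
    for each field, and refuses reads for any other purpose."""

    def __init__(self):
        self._records = {}  # field name -> (value, allowed purposes)

    def collect(self, field, value, purposes):
        # Store the value alongside the purposes disclosed at collection time.
        self._records[field] = (value, frozenset(purposes))

    def use(self, field, purpose):
        # Reads succeed only for a purpose the user actually consented to.
        value, allowed = self._records[field]
        if purpose not in allowed:
            raise PermissionError(f"{field} was not collected for {purpose!r}")
        return value

store = PurposeLimitedStore()
store.collect("phone", "+1-555-0100", purposes={"account_security"})
store.use("phone", "account_security")   # OK: matches the stated purpose
# store.use("phone", "ad_targeting")     # would raise PermissionError
```

In this framing, the conduct the FTC alleges is equivalent to reading the `phone` field for `"ad_targeting"` when only `"account_security"` was disclosed.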
The FTC has increasingly used past consent orders as the basis for penalizing large tech companies’ subsequent privacy and security practices. In 2019, for example, the FTC fined Google $170 million for violating children’s privacy rules on YouTube and fined Facebook $5 billion for violating an earlier consent order by failing to obtain users’ consent before collecting and using personal information. The penalty for Twitter also arrives as digital privacy activists express concern over increasingly extractive data practices by platforms and alarm over the use of targeted advertising in the election cycle.
RESPONSES
Eva Galperin, Director of Cybersecurity at the Electronic Frontier Foundation, tweeted, “This was such a sleazy thing for Twitter to do and I only wish this had cost them more than $150 million, which will be shrugged off as the cost of doing business.”
Elon Musk tweeted, “If Twitter was not truthful here, what else is not true? This is very concerning news.”
In response to a Twitter user who suggested Twitter switch to a subscription model, Musk replied, “Exactly.”
Matthew Green, professor of cryptography at Johns Hopkins, tweeted, “Facebook did this too and it was such obvious crap. Most likely Facebook thought the fine was a good deal, but Twitter at this point is close to running a GoFundMe and needed this money.”
Leah Nylen, Bloomberg reporter, tweeted, “Twitter's use of the phone numbers also means it had falsely claimed it was complying with the EU-US and Swiss-U.S. Privacy Shield Frameworks, according to the complaint.”
Quoting the FTC’s statement, Jesse Lehrich, co-founder of Accountable Tech, tweeted, “while Twitter represented to users that it collected their telephone numbers & email addresses to secure their accounts, Twitter failed to disclose that it also used user contact information to aid advertisers in reaching their preferred audiences.”
OPEN TABS
How the Biden administration let right-wing attacks derail its disinformation efforts (Washington Post)
California lawmakers push ahead with sweeping children's online privacy bill (CBS)
Lummis-Gillibrand crypto bill promises regulatory clarity (Protocol)
Very, Very Little Of ‘Content Moderation’ Has Anything To Do With Politics (TechDirt)
ACLU And Partners Tell Federal Reserve: On Digital Cash, Anonymity Is Not Negotiable (ACLU)