Buffalo Shooting Sparks Content Moderation Debate & More
Social Media & Streaming Platforms Scrutinized Over Role In Facilitating Hate Speech Following Buffalo Shooting
On Saturday, a gunman in Buffalo, New York mounted a camera to his helmet and livestreamed to Twitch as he killed 10 people in a supermarket. Twitch acted swiftly to take down the stream (halting the broadcast within 2 minutes of the violence starting), but the video has been shared and viewed millions of times across social media platforms and hosted on smaller streaming platforms where lower investment in moderation tools has allowed reposts to spread largely unchecked.
Coverage has noted that platforms’ attempts to contain reposts (and combat hate speech related to the shooting) have been largely unsuccessful. Multiple sources noted that the gunman “chose Twitch” because he allegedly believed he could more easily stream to the platform and avoid removal. Likewise, reports showed that Facebook failed to remove links redirecting to reposts of the video on smaller sites (in one case generating 43,000 interactions). Meanwhile, similar direct links to the video on Twitter spread with little moderation over the weekend, and the company’s response only elicited further confusion (it said it “may” remove some links disseminating the manifesto, before later clarifying that all videos of the attack would be removed). However, the role of messaging service Discord in facilitating this violence was perhaps the most widely criticized: the platform (which relies heavily on user reports to moderate content and doesn’t actively monitor users’ servers) saw the gunman post repeatedly about his planned attack for months on a private server before executing it.
The gunman's attack was racially motivated, and he cited the "Great Replacement" conspiracy theory in his manifesto. (Note: the Center for Countering Digital Hate recently reported that platforms fail to take action on 90% of posts promoting racist conspiracy theories.) Civil rights groups have pointed to conservative opposition to discussing race relations in classrooms as helping enable racist acts like the one in Buffalo. Similarly, progressive advocates have highlighted existing flaws in platforms’ moderation strategies (such as removing English-language uploads of the video faster than those in other languages) as helping fuel online extremism. Content moderation experts have likewise been quick to point out that laws aimed at holding tech companies accountable (such as Texas’s HB20) might deter platforms from deplatforming extremists or removing harmful content in the future.
RESPONSES
Color of Change tweeted, “@Twitch enabled the shooter to live stream this white supremacist attack. A publicly accountable racial equity audit is necessary to ensure Twitch lives up to its commitment to Black creators and Black communities. Demand #TwitchDoBetter.”
President Rashad Robinson tweeted, “The right-wing doesn’t want to talk about race in schools, yet they want social media companies and cable platforms to be able to profit from lies, hate and disinformation about race… When honest conversations about racism are stifled and replaced with hateful disinformation, all while gun control remains practically nonexistent, that creates a formula for violent acts like what we [saw] today in Buffalo.”
He later tweeted, “Twitch enabled the shooter to livestream this white supremacist attack. Comcast and Verizon continue to carry Fox News, where Tucker Carlson continues to spread racist lies about the great replacement theory. Twitter refuses to remove content celebrating Kyle Rittenhouse [and] monetizes white nationalists like Richard Spencer... @ColorOfChange members have warned these companies — and our elected leaders — about the harm posed by their failures to act.”
The Center for Countering Digital Hate tweeted, “Buffalo is another painful example of the real-world cost of online hate and Big-Tech’s sheer indifference to tragedy.”
In an op-ed for the Guardian, CEO Imran Ahmed wrote: “Social media and online spaces are often where people meet, seek information and become radicalized through a rabbit-hole of lies, hate and misinformation. Those with fringe beliefs will be exposed to increasingly more radical content as a result of recommendation algorithms. The failure of social media giants to effectively tackle online hate and misinformation has real-world impacts. Words can kill.”
In a statement, Free Press wrote: “What if people in media, tech and politics stopped just ‘doing their jobs’ and committed to making a change? What if they stopped participating in the production, financing and spread of racist propaganda? What if they put the racists’ megaphones out of business? What if they changed the terms by which the social-media platforms operate?”
Accountable Tech tweeted, “Big Tech platforms including Facebook, Twitch, and Reddit have become a powerful megaphone for domestic terrorists to post their manifestos and document their crimes. ‘This spreads like a virus,’ said @GovKathyHochul.”
Sleeping Giants tweeted, “At Elon Musk’s @twitter, not only would the Buffalo shooter’s manifesto be allowed, but (correct us if we’re wrong) likely the livestream itself, as they both fall under our country’s free speech laws. Nothing from @elonmusk today, though.”
In a Twitter thread, Platform Regulation Director at Stanford’s Cyber Policy Center, Daphne Keller, wrote: “Can platforms operating under Texas’s new law take down the vile Buffalo video, or posts extolling it, or ‘replacement theory’ posts? NO ONE KNOWS… That’s part of the insanity of the Texas law. It’s like a litigation DDOS attack.”
She continued, “[F]or platforms deciding whether to be openly noncompliant, this presents a stark, high-stakes internal test case. I can’t imagine they’ll choose to leave this garbage up. And maybe that precedent becomes a factor shaping their overall Texas strategy.”
Founder of Techdirt, Mike Masnick, tweeted, “[I]t's quite likely that Twitch removing this channel violates Texas' new social media content moderation law, which is now in effect. Just to give you a sense of how messed up the law is.”
He continued, “Just to drive this point home, in the run up to the law passing, an amendment was proposed making it clear that sites could still take down ‘domestic terrorist’ content. And the Republicans rejected it.”
Evelyn Douek, Fellow at Columbia University’s Knight First Amendment Institute, wrote in a Twitter thread: “It's not obvious to me that the Texas social media law would require platforms to carry the Buffalo shooting video… Removing posts that praised the video or the Great Replacement conspiracy theory would be another thing and viewpoint-based. But removing all posts that [depict] graphic gun violence is a content-based category.”
Writing for Tech Policy Press, Dr. Welton Chang outlined questions about platform content moderation that the Buffalo shooting raises. From his piece: “Even if the largest social media and messaging platforms achieve perfect moderation – an ideal that is far from the current reality – the threat will persist, as white supremacy is deeply embedded in American society, in our politics, in our communities, and even in families. Just as we must take a systemic, cross-platform view of the content moderation problem that violent extremism poses, we must take a whole-of-society approach to confronting the hate that inspired the attack in Buffalo.”
Research Director at the Shorenstein Center on Media, Politics and Public Policy, Dr. Joan Donovan, tweeted, “Events like the one in Buffalo are particularly jarring. For every one of us who researches the socio-technical systems that give rise to a specific kind of amplified violence, we are still unable to prevent these tragedies. However, we can decide to give him no notoriety.”
Editor at Tech Policy Press, Justin Hendrix, tweeted, “I was thinking about the line between Jan 6 and Buffalo- the power and pervasiveness of white supremacy are sewn deeply into the fabric of this country. The signals elites send matter. Let's hope there is accountability.”
Advocacy Groups Call On Platforms To Combat Election Disinformation Ahead Of Midterms
As Americans vote in primary elections across the country ahead of the November 8 midterm elections, advocacy groups are urging digital platforms to share plans for how they’ll handle disinformation on their sites. They’ve highlighted the importance of social media platforms’ content moderation strategies for dealing with disinformation, including combatting claims of election fraud, fact-checking electoral content, and more. This week, a coalition of 120+ advocacy groups sent a letter to the CEOs of various social media companies, laying out several affirmative actions to combat election disinformation, including “...introducing friction to reduce the spread and amplification of disinformation, consistent enforcement of robust civic integrity policies; and greater transparency into business models that allow disinformation to spread.”
Whistleblower Frances Haugen, a former Facebook employee, provided detailed insight into the company’s inner workings around the 2020 election. She asserted that the tech giant allowed misinformation to flourish in exchange for increased growth, releasing documents that suggest its platforms amplify hate and misinformation in order to increase profits. Platforms have made efforts since the last election cycle, including more frequent release of transparency reports and disclosure of data to researchers. However, they still lag behind in preventing disinformation, particularly disinformation targeting non-English-speaking communities.
RESPONSES
The Center for Democracy and Technology wrote in a statement: “People must have access to accurate information about where & how to vote. CDT urges social media companies to combat online voter suppression & we’ve highlighted the crucial role that elections officials can play as well. Election disinformation is also a threat to democracies worldwide — action is needed not just in the U.S., but everywhere elections are happening.”
Public Knowledge released a statement on joining the coalition, also tweeting: “This November will be the first national election since the January 6 insurrection & we want to ensure that Big Tech companies will not allow election disinformation on their platforms… We must protect the integrity of the 2022 midterm elections and the public’s confidence in American democracy by blocking the spread of manipulative and false election information online.”
The Anti-Defamation League tweeted, “We've joined 120+ orgs in calling on @Meta, @Twitter, @Snapchat, @Google, @instagram, @YouTube & @tiktok_us to take action ahead of the '22 midterms to stop the spread of election disinformation, which leads to voter intimidation, harassment & suppression.”
Free Press tweeted, “We are proud to join 120+ organizations in urging social media companies to prevent the spread of election disinformation… To protect the integrity of the #2022Election and the public’s confidence in our democracy, social media companies must take immediate action.”
Director of Digital Justice and Civil Rights, Nora Benavidez, said in a statement: “[Election disinformation] is a systemic effort to discredit and disenfranchise certain voters… These social media companies must do better in the run-up to November’s midterms, starting with fixing their algorithms, protecting people equally, and increasing their transparency. Every day that passes without these essential fixes is another day disinformation takes hold and weakens democracies here and abroad.”
Erin Simpson, Director of Technology Policy at the Center for American Progress, said in a statement: “The role social media platforms play in enabling hate and disinformation is not inevitable. Platforms must take seriously their responsibility to protect more than just their bottom lines—we need decisive action to safeguard the 2022 midterm elections and the public’s confidence in American democracy.”
Maya Wiley, President and CEO of The Leadership Conference on Civil and Human Rights, said in a statement: “The relentless disinformation on social media platforms threatens civil rights, escalates hate and violence, undermines election integrity and the public’s confidence in American democracy, and imposes barriers to the ballot box, particularly for people from historically marginalized communities.”
Antitrust Advocates Say Kanter Shouldn’t Be Recused From Google Cases
Bloomberg reported last Tuesday that the Department of Justice’s antitrust head, Jonathan Kanter, had been temporarily barred from working on the agency’s monopoly investigations into Google while the DOJ considers recusing him entirely. Kanter, who before joining the DOJ served as a lawyer representing clients like Yelp and Microsoft in antitrust lawsuits against Google, has been the target of opposition from industry and trade groups who question his ability to remain impartial in monopoly investigations given his previous work.
In response, a coalition of 28 progressive anti-monopoly groups (including Accountable Tech, the American Economic Liberties Project, and Fight for the Future) sent a letter to the DOJ defending Kanter’s record on antitrust. They also said that if the DOJ capitulates to Google’s calls to recuse Kanter, the agency would be “giving other powerful corporate actors incentive to engage in similar behavior.” In the past, the DOJ has authorized waivers for senior officials to participate in investigations even when their respective prior employers were represented parties (including the head of the Civil Rights Division, Vanita Gupta, the Principal Associate Attorney General, Matthew Axelrod, and the Counselor to the Attorney General, Bryan Boynton).
RESPONSES
In a statement, Demand Progress wrote: “The DOJ is currently sidelining Assistant Attorney General Kanter, preventing him from working on the Google antitrust case while it considers Google's baseless recusal request… Meanwhile, with so many attorneys who left corporate clients to join the DOJ working without such scrutiny, we want to find out what is causing the delay."
Communications Director Maria Langholz tweeted, “Jonathan Kanter shouldn't have to recuse himself from matters regarding Google, and he CERTAINLY should not have to before a decision is made about his recusal waiver.”
AELP said in a statement: “Federal ethics law and regulations are very clear about the circumstances under which a recusal is necessary; not a single one of those conditions applies to Jonathan Kanter… Efforts to bar Kanter from this case are transparent attacks on a formidable attorney who has devoted his career to reinvigorating antitrust enforcement… They are also disrespectful to the President and Senate, which explicitly nominated and confirmed Jonathan Kanter to enforce the law in the context of Big Tech’s abuse of power.”
The Revolving Door Project wrote in a Twitter thread, “Kanter… does not possess financial conflicts of interest that threaten his impartiality[,] has not represented a party in the Google case[, and] has not ‘switched sides.’”
They continued, “Of course, Google is not actually concerned about ethics law. ‘To put it plainly, Google’s demand that Mr. Kanter recuse himself from scrutiny of the company is an effort by a corporate giant to bully regulators into submission.’”
Sen. Elizabeth Warren tweeted, “@POTUS picked Kanter as his top antitrust lawyer precisely because of his experience enforcing antitrust law. It's absurd to suggest that experience somehow disqualifies him. @TheJusticeDept must reject @Google's attempts to bully law enforcement. No company is above the law.”
CEO of Digital Content Next, Jason Kint, re-tweeted Warren, adding: “I find this baffling. Anyone who has even tried to find a dc antitrust/tech law firm - ahem - knows nearly all work for or want to work for Google. Yet Kanter who is one of top experts, taking consistent positions G may be an antitrust issue (duh), is conflicted? Baffling proof.”
CEO of Chamber of Progress, Adam Kovacevich, also re-tweeted Warren, saying: “Seems to me that @TheJusticeDept would *want* to recuse Kanter from Google cases, to deny Google the ability to use Kanter's past client work for Yelp, MSFT, News Corp as evidence of DOJ bias -- when the DOJ search distro case is finally litigated… Besides, DOJ's current antitrust suit against Google was launched under Trump, not Biden, and won't go to trial until 2023. Kanter's participation won't change DOJ staff attorneys' litigation strategy.”
Musk Stalls Twitter Buyout Over Concerns About Fake Accounts
Elon Musk tweeted Friday that he was pausing his offer to purchase Twitter “pending details supporting [the] calculation that spam/fake accounts do indeed represent less than 5% of users.” While Musk reaffirmed in a tweet hours later that he is “still committed to [the] acquisition,” Twitter’s stock has lost all of the gains it made after Musk first disclosed his 9.2% stake in April. Additionally, Musk noted that a lower buyout price is “not out of the question” should the share of bots on Twitter exceed the company’s estimated 5%. A decision by Musk to walk away from his offer could further damage the company and increase its vulnerability to another takeover.
Ironically, Musk’s own Twitter account has been flagged by some programs as a bot account. Currently, the best way to target bots appears to be Twitter’s own machine-learning systems, which incorporate signals such as IP addresses, devices, and all activity from the account in rendering a judgment.
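To illustrate the kind of signal-based scoring described above, here is a minimal, purely hypothetical sketch in Python. The feature names, thresholds, and weights are invented for illustration and do not reflect Twitter’s actual system, which relies on far richer machine-learned models.

```python
# Hypothetical sketch of signal-based bot scoring. All features, weights,
# and thresholds here are illustrative assumptions, not Twitter's system.

def bot_score(account: dict) -> float:
    """Return a 0-1 score; higher means more bot-like (illustrative only)."""
    score = 0.0
    # Many accounts sharing a single IP address is a classic automation signal.
    if account.get("accounts_on_same_ip", 1) > 10:
        score += 0.4
    # Extremely high, round-the-clock posting rates suggest automation.
    if account.get("tweets_per_day", 0) > 100:
        score += 0.4
    # No recognizable device fingerprint is another weak signal.
    if not account.get("has_device_fingerprint", True):
        score += 0.2
    return min(score, 1.0)

# Example: a high-volume account sharing an IP with many others.
suspect = {"accounts_on_same_ip": 50, "tweets_per_day": 500,
           "has_device_fingerprint": False}
print(bot_score(suspect))  # prints 1.0
```

A real system would combine hundreds of such signals in a trained classifier rather than hand-set thresholds; the point is only that judgments rest on aggregate account behavior, not any single tweet.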
RESPONSES
Jason Kint, CEO of Digital Content Next, tweeted, “Super interesting take because Musk and his army really are significantly less influential without access to Twitter as their weapon. He could go invest in Trump’s failing platform. Bye.”
Sleeping Giants tweeted, “All @elonmusk needed to do to prove that more than 5% of the users on this platform were bots is look below any tweet even remotely critical of him.”
Elizabeth Spiers, political commentator at MSNBC, tweeted, “I’m sure this has nothing at all to do w Tesla stock and crypto imploding.”
Tom Joseph, tech analyst and podcaster, tweeted, “Convinced Musk’s offer to buy Twitter was a ruse in order to have an excuse to dump billions of dollars worth of his Tesla stock at $900 a share, before it falls to a much lower level. Now he’s being psycho on purpose to make the Twitter purchase fall apart & he’ll keep his cash.”
OPEN TABS
Stanford Cyber Policy Center - “Big Speech” with Kate Klonick, Doug Melamed, Nate Persily
Corporate Lobbyists Secretly Waging War On Gigi Sohn (More Perfect Union)
Facebook quietly bankrolled small, grass-roots groups to fight its battles in Washington (Washington Post)
Lawmakers Urge FTC to Investigate ID.me and its Facial Recognition Tech (Vice)
How the Biden administration let right-wing attacks derail its disinformation efforts (Washington Post)
Why the 21st Century Antitrust Act is Critical for New York Workers (New Yorkers For A Fair Economy)
Thousands Call on Federal Trade Commission to Make Privacy and Civil-Rights Rule on Data Protection (Free Press)