First-of-its-Kind Letter Calls for Ban on Private and Corporate Use of Facial Recognition

Groups call facial recognition “too dangerous to exist,” say it must be abolished

FOR IMMEDIATE RELEASE: Wednesday, April 14
Contact: Caitlin Seeley George, caitlin@fightforthefuture.org

More than 20 civil and human rights organizations are expanding the fight against facial recognition and calling for a ban not only on government and law enforcement use of the technology, but also private and corporate use.

The letter, which highlights recent abuses by corporations including Uber Eats, Amazon, and Apple, states that this technology threatens to suppress workers’ rights to organize, makes frontline workers susceptible to harassment and exploitation, puts personal biometric data in danger, and exacerbates existing biases.

The letter says that “In a world where private companies are already collecting our data, analyzing it, and using it to manipulate us to make a profit, we can’t afford to naively believe that private entities can be trusted with our biometric information. A technology that is inherently unjust, that has the potential to exponentially expand and automate discrimination and human rights violations, and that contributes to an ever growing and inescapable surveillance state is too dangerous to exist.”

While the call to ban law enforcement and government use of facial recognition has grown, and lawmakers have banned this use in many cities (and introduced a federal bill), Portland, OR is the only city to ban private use of facial recognition thus far. The organizations point to the Portland legislation as a template for other lawmakers to address the concerns with private and corporate use of the technology, and call on “local, state, and federal elected officials, as well as corporate leaders, to ban the use of facial recognition surveillance by private entities.”

“There is zero reason to believe that corporations can use this technology responsibly, especially at a time when these companies are already collecting our data and using it to manipulate us for profit,” said Caitlin Seeley George (she/her), Director of Campaigns and Operations at Fight for the Future. “This technology is inherently discriminatory and dangerous; no amount of regulation can address that. In order to protect people in workplaces, stores, restaurants, hospitals, transit and beyond, we must ban it.”

“Opt-in, consent-based regulatory frameworks will not address these harms,” added Evan Greer (she/her), Deputy Director at Fight for the Future. “If employees have to agree to being under constant facial recognition surveillance in order to have a job, that’s not meaningful consent. If a patient has to agree to have their biometric information collected in order to receive care at a hospital, that’s not really consent. Even more innocuous uses, like getting your face scanned to buy a burrito, come with significant risks. The vast majority of people have no idea what the dangers of this technology are, and putting the onus on them fails to recognize power imbalances.”

“Facial recognition technology poses serious threats to personal freedom. Letting this tool of authoritarian control spread throughout the private sector has serious implications for worker organizing rights and heightens the risk of catastrophic biometric data breaches,” said Tracy Rosenberg, Advocacy Director at Oakland Privacy. “You can’t replace your face. The troubled record of facial recognition technology in identifying darker-skinned people and youth poses severe dangers for people too often criminalized. Facial recognition technology should be put back in the bottle. We don’t need it, and the dangers can’t be regulated away.”

“Facial recognition being prone to racial bias is not its only problem. If it were 100% accurate, it would be horrifying. If you’re tracked wherever you go, your movements are laid bare for any company or government to exploit. Facial recognition deployments strip away your whole right to be let alone, in the name of more efficient advertising and policing. It’s not worth it,” said Alex Marthews, National Chair of Restore The Fourth.

“Corporate facial recognition fuels racist policing of Black, brown, and immigrant communities,” said Aly Panjwani, Policy & Advocacy Manager at the Surveillance Technology Oversight Project. “Facial recognition is biased, broken, and dangerous to the livelihood of working-class people. This technology exists to monitor, exploit, and incarcerate and must be banned.”

“The companies that develop and sell facial recognition technology need to recognize and confront its inherent dangers – and they need to stop it now,” said Michael Connor, Executive Director of Open MIC, a nonprofit which has organized corporate shareholders to oppose the spread of facial recognition. Connor noted that a shareholder proposal at Amazon highlighting the human rights risks of the company’s facial recognition product won more than 40 percent of the independent shareholder vote at Amazon’s 2020 annual meeting, with another vote scheduled at the upcoming 2021 annual meeting. “Investors increasingly understand the dangers of facial recognition,” Connor said. “Managements and boards of directors should take note.”

“Facial recognition is one of the most dangerous forms of surveillance ever invented. We know that its use — both by private and government entities — puts Black and brown communities already targeted by state violence at an even higher risk of arrest and incarceration. And we know that it’s already being used to target and silence protesters, deport migrant families, and control and surveil workers by their employers at Amazon warehouses and beyond. It’s clear to us that the dangers this technology poses can’t be ‘reformed’ or ‘regulated,’ and we cannot trust tech companies — who are making enormous profits off of this tech — with the surveillance tools they already have. We must ban corporate and private use of facial recognition and fight for a surveillance-free future for all of us,” added Laura Barrios, Campaign Manager, MPower Change.

“Corporate use of facial recognition will serve as an end-run around bans on government use of the technology and is a profound danger to the public in its own right. Face surveillance is too powerful for any entity to use because it enables widespread and surreptitious tracking of individuals on the back of cheap and omnipresent devices: cameras. The harms of facial recognition, both when it errs and when it is accurate, fall predominantly upon people of color, low-income individuals, and migrants. The use of this technology threatens to turn everyone into a suspect. FRT also permits unprecedented surveillance of workers, both on the job and off the clock. The only responsible step is for corporations to stop using facial recognition,” said Jeramie Scott, Senior Counsel and Director of the Surveillance Project at the Electronic Privacy Information Center.

“Let’s face it, the new gold standard for corporate power is private data, and owning your face is about as personal as it gets. Furthermore, corporations using facial recognition technology further exacerbates the criminalization of Black and Brown people,” said Matt Nelson, Executive Director of Presente.org, the nation’s largest Latinx digital organizing group. “Profiting from a surveillance state is an unethical, dangerous racket and has no place in a future democracy that works for all of us.”

The release of this letter comes after a handful of recent cases that highlight the growing problem of facial recognition being used by corporations: the hack of more than 150,000 Verkada security cameras, which include facial recognition software and are used in offices, gyms, hospitals, jails, schools, police stations, and more; Disney’s announcement that it will be testing facial recognition at the entrance to the Magic Kingdom; and the incidents involving Uber Eats, Apple, and Amazon previously mentioned.

Organizations signed onto the letter include Action Center on Race and The Economy (ACRE), American-Arab Anti-Discrimination Committee, Cryptoharlem, Daily Kos, Data for Black Lives, Demand Progress, Electronic Privacy Information Center (EPIC), Fight for the Future, Greenpeace USA, Massachusetts Jobs with Justice, MediaJustice, Mijente, MPower Change, Muslim Justice League, Oakland Privacy, Open MIC (Open Media & Information Companies Initiative), Presente.org, Privacy PDX, Public Citizen, RAICES, Restore the Fourth, RootsAction.org, Secure Justice, S.T.O.P. (Surveillance Technology Oversight Project), and United We Dream.


Open Letter: banning government use of facial recognition surveillance is not enough, we must ban corporate and private use as well

Wired has reported that Uber Eats drivers in the UK are being fired because of the company’s faulty facial identification software, which requires drivers to submit selfies to confirm their identity. When the technology isn’t able to match photos of the drivers with their accounts, drivers get booted off the system and are unable to work, and thus unable to pay their bills. This isn’t the first time this has happened—in 2019 a Black Uber driver in the U.S. sued the company for its discriminatory facial recognition.

Cases like this are becoming increasingly prevalent: Amazon delivery drivers now have to agree to AI surveillance, including facial identification, or else lose their jobs, and Apple recently banned the use of facial recognition on employees visiting manufacturing sites but failed to extend that ban to protect factory workers. This level of surveillance creates many problems, including suppressing worker efforts to organize and engage in collective action. In each of these cases, frontline and marginalized workers are being targeted, and their safety and rights are being undermined in favor of corporate surveillance, control, and power.

These cases clearly show how private use of facial recognition by corporations, institutions, and even individuals poses just as much of a threat to marginalized communities as government use. Corporations are already using facial recognition on workers in hiring, to replace traditional timecards, and to monitor workers’ movements and “productivity”—all of which particularly harms frontline workers, making them susceptible to harassment and exploitation and putting their personal information at risk.

Using biometric surveillance technology in retail stores, hospitals, and healthcare settings, at concerts and sporting events, or in restaurants and bars will exacerbate existing discrimination. In the same way that Black and brown communities are targeted by police, companies can target certain communities with their facial recognition surveillance. A store could use a publicly available mugshot database to ban everyone with a criminal record from the store, which would disproportionately harm Black and brown people who are over-policed and over-represented in these databases. The impact of this would be compounded by the fact that facial recognition is notoriously bad at correctly identifying Black and brown faces. Overall this feeds a system of mass criminalization, where Black and brown people are treated as guilty everywhere they go.

Biometric surveillance is more like lead paint or nuclear weapons than firearms or alcohol. The severity and scale of harm that facial recognition technology can cause requires more than a regulatory framework. The vast majority of uses of this technology, whether by governments, private individuals, or institutions, should be banned. Facial recognition surveillance is inherently discriminatory. It cannot be reformed or regulated; it should be abolished.

In 2020, Portland, OR, passed a groundbreaking ban on private use of facial recognition, which smartly bans use in places of public accommodation as defined by the Americans with Disabilities Act. We believe this ordinance should be used as a template for more city, state, and federal legislation that bans private and corporate use of facial recognition surveillance. 

In a world where private companies are already collecting our data, analyzing it, and using it to manipulate us to make a profit, we can’t afford to naively believe that private entities can be trusted with our biometric information. A technology that is inherently unjust, that has the potential to exponentially expand and automate discrimination and human rights violations, and that contributes to an ever growing and inescapable surveillance state is too dangerous to exist.

We call on all local, state, and federal elected officials, as well as corporate leaders, to ban the use of facial recognition surveillance by private entities. The dangers of facial recognition far outweigh any potential benefits, which is why banning both government and private use of facial recognition is the only way to keep everyone safe.

Signed,

Action Center on Race and The Economy (ACRE)
American-Arab Anti-Discrimination Committee
Cryptoharlem
Daily Kos
Data for Black Lives
Demand Progress
Electronic Privacy Information Center (EPIC)
Fight for the Future
Greenpeace USA
Massachusetts Jobs with Justice
MediaJustice
Mijente
MPower Change
Muslim Justice League
Oakland Privacy
Open MIC (Open Media & Information Companies Initiative)
Presente.org
Privacy PDX
Public Citizen
RAICES
Restore the Fourth
RootsAction.org
Secure Justice
S.T.O.P. (Surveillance Technology Oversight Project)
United We Dream


For the first time, public libraries are barred from offering at least five Oscar-nominated films

Films from Netflix, Hulu, and Amazon Studios are nominated for Best Picture, Best Original Screenplay, as well as Lead Actress and Actor—but several will not be available to those who can’t get fast internet or afford a subscription.

Press contact: press@fightforthefuture.org
For immediate release Tuesday April 13, 2021

Image by analogicus from Pixabay features rows of gleaming gold Oscars trophies.

2020 is the first year that streaming-only works are eligible for Academy Awards, due to a pandemic exception. In their availability assessment, Fight for the Future and Library Futures considered titles nominated for Best Picture and Best Original Screenplay, as well as Lead Actress and Actor—prominent awards whose trends forecast the future of the film industry. The lack of public library availability in these major categories sets a dangerous precedent in the age of streaming giants, which are growing not only as major arbiters of culture but now as arbiters of access as well.

In a new blog post, Library Futures and Fight for the Future are calling on Netflix, Hulu and Amazon Studios to make their content available to public libraries on the same terms as theatrical releases. The streaming giants are setting a dangerous new precedent for the most important films of the year—that important cultural works and knowledge are only for people with disposable income.

Among the works nominated in the most prominent award categories, Netflix’s Ma Rainey’s Black Bottom, Pieces of a Woman, and The Trial of the Chicago 7; Amazon Studios’ Sound of Metal; and Hulu’s The United States vs. Billie Holiday are unavailable for public libraries to purchase, preorder, or even license for their collections.

Since VHS tapes democratized access to films, many library users have enjoyed the opportunity to view important cultural works by borrowing them. But in the digital age, Big Tech is prioritizing profit and data surveillance over libraries and the diverse, often low-income people who rely on them.

“When so many rural, urban, and low-income people lack affordable high speed internet access or disposable income, tech giants are exacerbating inequality by locking important knowledge and art behind a paywall,” said Lia Holland (she/they), Campaigns and Communications Director at Fight for the Future. “This inequity is particularly staggering when you consider the content of the unavailable films themselves—the themes of protest, persecution, racial equity, and gender equity that are essential to our times. Do they truly believe that the most compelling stories to inspire change should be only for people in upper class communities?”

“During the last major financial crisis in 2008, users flocked to the library to gain access to an enormous collection of content, including recent films. These collections supported patrons from every income level and background, and circulation shot up all over the country,” said Jennie Rose Halperin (she/her), Executive Director at Library Futures. “Now, paying for access to all of the Academy Award nominated films on three separate streaming platforms would cost almost $400 per year – and that’s assuming you can afford internet access at all. As streaming has moved from distribution to content production, streaming services have moved to prohibit libraries and under-resourced communities from purchasing films in a digital or physical format.”

New Data on Law Enforcement Use of Clearview Added to Map Tracking Use of Facial Recognition Across the U.S.


FOR IMMEDIATE RELEASE: April 8, 2021
Contact: Caitlin Seeley George, caitlin@fightforthefuture.org, 303-594-4321

With more data on where facial recognition is used, the urgency of passing legislation to ban government and law enforcement use of the technology grows.

Earlier this week, BuzzFeed News broke the story that employees at law enforcement agencies across the country have run thousands of facial recognition searches using the controversial Clearview AI app. The investigative research included data from a confidential source that shows nearly 2,000 agencies that have used the application in some fashion.

This data has now been added to the Ban Facial Recognition Map: https://www.banfacialrecognition.com/map

This interactive map, created by digital rights group Fight for the Future, shows where facial recognition surveillance is happening, how it’s spreading, and where there are efforts to rein it in. It is also a resource for people to take action and send messages to their lawmakers, calling on them to ban law enforcement and government use of the technology.

With the addition of this data on Clearview use, every state except Vermont—the only state that has banned law enforcement use of facial recognition—plus DC and the Virgin Islands, is represented on the map.

The states with the most taxpayer-funded entities that have used Clearview are:

  • California* (140)
  • Florida (116)
  • Alabama (103)
  • New Jersey (101)
  • Texas (100)
  • Illinois (99)
  • Georgia (72)
  • North Carolina (64)
  • Pennsylvania* (63)
  • New York (61)

*States with local bans on law enforcement use of facial recognition

“Although it’s terrifying to add nearly 2,000 more places to our map where we know facial recognition is threatening communities, this data highlights what we already know: that law enforcement is using facial recognition in ways that fundamentally threaten any semblance of human rights and due process, and exacerbate existing discrimination. The only way to stop this is to ban it,” said Caitlin Seeley George, director of campaigns and operations at Fight for the Future. “Since officers often use Clearview without their department’s knowledge or consent, this is the first time we’ve seen how widespread this use is. It’s clear that no amount of regulation can protect us when officers are already using Clearview in secret. The only solution is a ban.”

The good news is that cities, counties, and states are taking action to combat law enforcement use of facial recognition and are banning the technology. In the past few months Minneapolis, MN, Madison, WI, and New Orleans, LA have banned facial recognition. Last year Senators Markey and Merkley and Representatives Jayapal and Pressley introduced federal legislation to ban law enforcement and government use of facial recognition, and they are expected to reintroduce the legislation this year.


Musicians and digital rights activists launch campaign targeting Spotify over surveillance patent


Fight for the Future has teamed up with the Union of Musicians and Allied Workers (UMAW) to launch a campaign demanding Spotify abandon a patent it filed to use artificial intelligence voice recognition software to target music and ads. The campaign is accompanied by a music video for the song “Surveillance Capitalism” from Evan Greer, with proceeds donated to the #JusticeAtSpotify campaign.

Digital rights group Fight for the Future has teamed up with the Union of Musicians and Allied Workers (UMAW) to launch StopSpotifySurveillance.org. The campaign calls on Spotify to drop reported plans to use artificial intelligence and voice recognition software to spy on listeners’ conversations, conducting emotional surveillance and manipulation to target music and advertising. The campaign comes after human rights group Access Now sent a letter to Spotify last week demanding that the company abandon the surveillance patent.

The campaign is accompanied by a dystopian new video for the song “Surveillance Capitalism” from trans femme indie-punk artist Evan Greer (she/her), which blends layers of melodic indie punk guitars with audio samples from anti-surveillance activists and icons like Chelsea Manning, Jacinta Gonzalez of Mijente, Malkia Cyril of MediaJustice, and author Ursula K. Le Guin.

The video release provides a sneak peek at the song off Greer’s new album Spotify is Surveillance, which drops this Friday, April 9th on Get Better Records and Don Giovanni Records. Greer plans to donate all artist proceeds from the song to the Union of Musicians and Allied Workers to support their existing #JusticeAtSpotify campaign, calling for better pay, an end to payola, and more transparency.

Greer says, “The fact that Spotify filed a patent for this type of emotional surveillance and manipulation is beyond chilling. It’s not enough for them to say that they have no plans to use this technology right now; they should publicly commit to never conducting this type of surveillance on music listeners. Surveillance capitalism as a business model is fundamentally incompatible with basic human rights and democracy, regardless of whether it’s being employed by Facebook, Amazon, or Spotify. The song and video highlight the fact that the Internet has the potential to profoundly transform our society for the better, abolishing false scarcity and enabling universal access to human knowledge and creativity, while ensuring marginalized and independent artists and creators are fairly compensated for our labor. But if we allow a small handful of companies to dominate the web and the music industry with a parasitic business model based on surveillance and exploitation, we’re headed for the opposite: a dystopian future where algorithms decide what we see and hear based on profit, rather than artistry.”

UMAW and Fight for the Future are encouraging artists and concerned listeners to sign the petition at StopSpotifySurveillance.org, and are calling for the company to publicly commit to not using voice recognition surveillance on the platform.


University advocates e-proctoring alternatives, but struggles to remove e-proctoring option from McGraw-Hill Connect platform


“Faculty need support and using e-proctoring as a way out of that is not a good pedagogical solution for anyone.”

For Immediate Release April 1, 2021
Press contact: press@fightforthefuture.org

Today, staff at the University of Michigan-Dearborn who support faculty development and digital education released a paper in To Improve the Academy (TIA) titled “What happens when you close the door on remote proctoring? Moving towards authentic assessments with a people-centered approach.” But even as the campus bucks the trend of using eproctoring apps to monitor students during online assessments (many of which don’t actually eliminate cheating), it can’t keep them off campus entirely, due to McGraw-Hill’s partnership with embattled eproctoring app Proctorio and McGraw-Hill’s failure so far to remove it for UM-Dearborn users.

“Administrators and teaching and learning staff at University of Michigan-Dearborn made the decision to avoid adopting remote proctoring technologies and to instead invest in instructional design staff and faculty development programming to help faculty transition to authentic assessments,” the paper’s abstract states. “Remote proctoring services require access to technology that not all students are guaranteed to have, can constitute an invasion of privacy for students, and can discriminate against students of color and disabled students.”

But little did UM-Dearborn staff know that even as they were speaking out against the harms of eproctoring, McGraw-Hill was bringing Proctorio to all the campuses that use their McGraw-Hill Connect textbook platform. When confronted on February 5th by staff at UM-Dearborn and asked by upper administration to turn off the e-proctoring option, McGraw-Hill said they would remove eproctoring from the materials it provides to the institution within two weeks. Since the initial request, staff at UM-Dearborn have reached out to McGraw-Hill on several occasions. On March 22nd, McGraw-Hill responded with an apology and accepted fault for the delay. They said that the removal should be completed shortly, but as of the date of this release it has yet to occur.

“They said it would be two weeks; it’s been two months,” said Autumm Caines (she/her), Instructional Designer at The Hub for Teaching and Learning Resources at UM-Dearborn and a coauthor on the TIA paper. “It is profoundly disrespectful of the pedagogy that we advocate for. We put out this paper that focuses on rejecting remote proctoring and embracing people-centered supports for authentic assessments—we cannot make it more clear that we are looking for real solutions, not snake oil profiteering, on our campus.”

“McGraw-Hill’s failure here is outrageous, especially as they have been aware of the controversy surrounding Proctorio since at least December,” said Lia Holland (she/they), Campaigns and Communications Director at Fight for the Future, a digital rights organization leading the charge against discriminatory and invasive eproctoring software. “Rolling out a controversial technology without the knowledge, consent, or oversight of customers is incredibly poor form for one of the world’s largest textbook companies. Failing to make a simple modification to turn off that same controversial tech makes you wonder what is actually going on behind the scenes in McGraw-Hill’s deal with Proctorio. Do Proctorio’s false representations of its customer list stem from this very McGraw-Hill feature? Is McGraw-Hill refusing to turn off this option for its customers in order to help Proctorio falsify a larger customer list than it actually has?”

“Faculty need support, and using eproctoring as a way out of that is not a good pedagogical solution for anyone,” said Sarah Silverman (she/her), the lead author of the TIA paper. “We want to support instructors in assessing their students, and there is a lot more involved in that than simply preventing ‘cheating.’ I encourage instructors to develop assessments that engage students in an authentic task to show how they can apply their knowledge. A great side effect of this type of assessment is that it is not conducive to cheating, making eproctoring unnecessary.”

Outcry from students and human rights experts, as well as investigations into the foundations of the technology itself, is compelling universities to turn their backs on eproctoring. Those seeking to evolve their remote learning practices can consult the TIA paper, in which “lessons learned and recommendations are provided for other educational developers or institutions who want to resist remote proctoring on their campuses.”

The full text of “What happens when you close the door on remote proctoring? Moving towards authentic assessments with a people-centered approach” is available at https://quod.lib.umich.edu/t/tia/17063888.0039.308?view=text;rgn=main. Its coauthors are available for comment by reaching out to Autumm at acaines@umich.edu or Sarah at sarahsil@umich.edu.

Are millions of K-12 students about to be surveilled & analyzed with the same proctoring tech universities are abandoning?

The Biden Administration is denying pandemic waivers for testing children as young as 8. Companies with controversial e-proctoring features hold the contracts.


For Immediate Release Wednesday March 31, 2021
Press Contact: press@fightforthefuture.org, (508) 474-5248

K-12 schools across the country are on the verge of holding remote-proctored state assessment tests, putting millions of children on camera and potentially subjecting them to the same snake oil facial recognition & biometric AI features universities are abandoning in the wake of backlash over racial bias, ableism, discrimination, privacy, and efficacy concerns.

In 2020, federal K-12 student testing requirements for states were waived due to the pandemic, but in 2021 education technology company lobbyists have caught the scent of pandemic recovery money, and are advocating for remote tests that educators insist will be useless at best, and harmful at worst.

Members of major state testing consortium SBAC, including California, have a contract with opaque educational technology vendor Cambium, a company that advertises controversial artificial intelligence and scoring algorithms for their tests. Members of major state testing consortium PARCC, including New Jersey, have a contract with Pearson to administer their tests. Pearson is partnered with embattled eproctoring company ProctorU. At least one institution in Texas uses Proctorio on K-12 students, collecting footage that at least 400 people may have access to. It is unclear whether Texas will use Proctorio to eproctor the upcoming federally-mandated exams.

“We need to recognize this moment in student privacy, surveillance, and data collection for what it is—an epic data heist leading to the use of predictive algorithms that could negatively impact students’ future opportunities,” said Roxana Marachi (she/her), associate professor of education at San José State University. “The push to eproctor these tests is based on a false premise—that the existence of data, no matter how flawed, false, or incomplete, matters more than the students themselves. The converging harms of e-proctoring, AI, and other data collection technologies in K-12 education are invisible to most school leaders, parents, educators, and students. Our privacy laws and practices have not kept up with the rapid influx of invasive educational technologies and it’s the height of hypocrisy for testing proponents to suggest that administering these tests will in any way serve the interests of underserved youth.”

Testing windows are already open in some states as of last week. Others are awaiting word on whether they will receive the testing waiver that Ohio was recently denied, creating a situation in which many hours of testing for individual students must be rolled out and scored in a matter of weeks. Some full-time teachers have just been provided 400 pages of test prep training for tests starting May 3. Others have yet to be provided any information at all.

Also unclear amidst this hurried testing rollout is how writing and other assessments will be scored—if, as with College Board’s Accuplacer test from this school year, they will be graded with the sorts of Automated Scoring AIs that caused outrage in Britain last year.

“The scope of this edutech cash grab, on the backs of children as young as eight years old, is truly astounding,” said Lia Holland (she/they), Campaigns and Communications Director at Fight for the Future, a group pushing back against child surveillance and eproctoring. “Just like at the university level, surveillance companies are swooping in to sell inequitable products that may include racist add-ons like facial recognition, and ableist anti-cheating algorithms that track so-called abnormal behavior like eye movement. On top of it all, these remote tests require stable internet connections that many kids just don’t have and constitute a major privacy violation. The only thing these tests will accurately assess is how many of our tax dollars surveillance & spyware companies will co-opt to harm the privacy and equitable education of vulnerable students. The normalization of child surveillance technologies in education must end immediately.”

Over 500 education researchers and scholars have co-signed a letter urging Education Secretary Cardona to grant states waivers to halt this year’s federally mandated standardized tests, noting that the tests will exacerbate inequality and produce invalid data. This letter comes following another endorsed by over 200 education deans and leaders that emphasizes how “the shift to online education widens long-standing inequities and injustices in education.”

Disinformation and human rights experts: gutting Section 230 will help Facebook and harm marginalized communities

FOR IMMEDIATE RELEASE: Thursday, March 25, 2021
Contact: press@fightforthefuture.org, (508) 474-5248 

Today, Fight for the Future held a livestream event with Dr. Joan Donovan of the Shorenstein Center as well as experts from the ACLU, Wikimedia, Access Now, Woodhull Freedom Foundation, and Reframe Health and Justice, who explained why gutting Section 230 won’t stop the spread of harmful content and disinformation online.

The event came just ahead of a hearing in the House Energy & Commerce Committee where lawmakers questioned the CEOs of Facebook, Google, and Twitter. Too often, reporting around these hearings focuses only on the statements of Big Tech CEOs and lawmakers, ignoring voices from civil society groups and smaller web platforms who have a crucial perspective to share. Earlier this year we also issued a letter signed by 70+ racial justice, civil liberties, LGBTQ+, and human rights groups opposing repeal or gutting of Section 230 and urging lawmakers to pass the SAFE SEX Worker Study Act to examine the public health impact of SESTA/FOSTA before making further changes to Section 230.

During the hearing, Facebook CEO Mark Zuckerberg expressed support for changing Section 230. That’s because such changes will help Facebook and harm human rights, without addressing harms like disinformation. Here are some quotes from participants in our event:

Evan Greer (she/her), Director of Fight for the Future, said: “Of course Facebook wants to see changes to Section 230. Because they know it will simply serve to solidify their monopoly power and crush competition from smaller and more decentralized platforms. Facebook can afford the armies of lawyers and lobbyists that will be needed to navigate a world where Section 230 is gutted or weakened. And they’ve shown repeatedly that they don’t care about the impact that Section 230 changes could have on the human rights or freedom of expression of marginalized people – they are happy to sanitize your newsfeed and suppress content en masse in order to avoid liability or respond to public criticism. Zuckerberg’s support for changes to Section 230 is about maintaining Facebook’s dominance and monopoly control, nothing more. Instead of helping Facebook by gutting Section 230, lawmakers should take actual steps to address the harms of Big Tech, like passing strong Federal data privacy legislation, enforcing antitrust laws, and targeting harmful business practices like microtargeting and nontransparent algorithmic manipulation.”

Dr. Joan Donovan (she/they) of the Shorenstein Center: "The internet still exists: Platforms are built on top of it, Facebook is a product, Facebook is not the internet. Speech is like the cassette tape that goes in the boombox of the internet. The problem is messy and the solution is going to come in many different ways, there is no Section 230 magic bullet. One thing we can do that is not 230-related: We can pump up the volume on timely, local, relevant content. We can create within timelines and newsfeeds, room for local journalism, room for things that are not trying to trigger emotional responses, information that is not often shared because it is not sexy but people do want and don’t always get in their feeds. What this looks like is asking for public interest obligations for social media and this doesn’t require us to go in 230 necessarily and do anything significant. It’s really important that we all come together - universities, civil societies, the law community - and come at this with an orientation that we don’t want to destroy the benefits that the internet has brought to us, but at the same time we want to put community safety at the center of design.”

Kate Ruane (she/her) of the ACLU: “When it comes to disinformation specifically, amending Section 230 is unlikely to truly address the problem. One of the issues we face is that disinformation has no clear definition, and to the extent that it simply means ‘speech that is false,’ it will often be protected by the Constitution, for better or for worse … It’s unclear to me what Section 230 changes to address disinformation will actually do to address the problems other than encouraging platforms to continue to deploy ever stricter censorship regimes, which we know disproportionately silence people of color, the LGBTQ community, Muslims, other marginalized groups, and people who express dissenting views. But that doesn’t mean we should throw up our hands when it comes to disinformation. There is a lot we can do … meaningful privacy restrictions can also be tremendously helpful. If we limit the data these companies can collect and then empower users to limit the ways that companies can use that data, it will be harder and harder for disinformation campaigns to target people in the first place … I think we need to be talking about those things, rather than changing Section 230.”

Sherwin Siy (he/him) of the Wikimedia Foundation: "The Wikimedia Foundation hosts projects like Wikipedia–we provide the servers, and work on the software and interfaces for it–but Wikipedia is written by tens of thousands of users, who change what’s on the site several times each second. Section 230 means that, should one of those edits defame someone or cause trouble, neither the Foundation nor any other editor gets blamed for that one person’s action.  It also means that the communities on these projects have the ability to create and enforce their own standards for how content gets moderated–and for the most part, that content moderation deals with how encyclopedic something is, not whether or not it’s illegal or abusive. Section 230 isn’t just about what is and isn’t decent–it’s about making sure a website, and the community on it, can set standards around things like not accepting original research, or self-promotion, or even creating standards around biographical information that respect article subjects’ rights that go beyond what’s required in the law. Having standards like these helps communities strive together to make Wikipedia as accurate and reliable as it can be, and Section 230 is a necessary part of making that happen.”

Lawrence (Larry) Walters (he/him), General Counsel for the Woodhull Freedom Foundation and attorney with Walters Law Group: “Requiring tech companies to moderate more user content through proposed Section 230 reform will not stop disinformation online, but will lead to greater censorship of constitutionally protected speech. Big Tech wants content regulation so they can claim they are simply following the law when shutting down disfavored speakers. This approach helps no one but a few large online platforms. The first attempt to tinker with Section 230, through FOSTA, was an unmitigated disaster resulting in censorship of protected expression and increased danger to sex workers. Congress should learn the hard lesson taught by FOSTA by fostering a free Internet by rejecting any further weakening of Section 230 immunity.”

“Repealing Section 230 will not solve the disinformation crisis,” said Jennifer Brody (she/her), U.S. Advocacy Manager at Access Now. “Disinformation wouldn’t be effective without coercive micro-targeting, and micro-targeting wouldn’t exist without invasive data harvesting practices. If we are serious about stopping the dangerous fire hose of lies online, we cannot overlook the importance of passing a rights-respecting federal data protection law in the United States.”

“As a community who has experienced being the target of legislative reforms and their unintended consequences, sex workers and people associated with the sex trade have borne the brunt of what happens when reforms to 230 do not consider marginalized communities, or create quickly drafted, budget-neutral bills,” said Kate D’Adamo, Partner at Reframe Health and Justice and long-time sex workers’ rights advocate. “While this conversation is centered on disinformation, it is using the same flawed starting point - to assume that 230 is the problem and that additional liability is the solution. What we need is not simply additional avenues for civil suits. What we need is transparency in how platforms are making decisions, accountability and redress for those who are constantly kicked off for exercising basic survival, and a serious investment in anti-violence efforts.”

###


20+ civil rights groups demand CNET, Consumer Reports, and other review sites stop recommending Amazon’s racist Ring cameras


IMMEDIATE RELEASE: Wednesday, March 24th
CONTACT: Evan Greer, press@fightforthefuture.org, 978-852-6457

Today, more than twenty racial justice, worker advocacy, privacy, and civil rights organizations released a joint letter calling on the editors of CNET, Consumer Reports, Digital Trends, TechRadar, Tom’s Guide, and Wirecutter to rescind their recommendations of Amazon Ring cameras given the threats Ring technology poses to Black and brown communities. 

See the letter here: https://www.fightforthefuture.org/news/2021-03-24-joint-letter-from-20-racial-justice-and-civil/

“Putting Black lives in danger is part of Amazon Ring’s business model. The tech giant weaponizes racist, fear-mongering culture by using racially-coded language and dog whistles to promote Ring products and partnerships,” the letter’s signatories write. “Amazon’s private surveillance network fuels the criminalization of Black and brown people by amplifying existing racism in our communities and policing––further subjecting communities of color to repressive police violence and feeding a system of mass incarceration.”

The letter goes on to discuss Amazon Ring representatives helping Los Angeles Police Department (LAPD) detectives obtain footage of Black Lives Matters protesters.

“It’s not surprising Amazon helped police use their surveillance dragnet to track down the very protesters fighting to dismantle the racist, repressive, militarized law enforcement system Amazon profits from. Roughly half of the police departments partnered with Amazon “are responsible for over a third of fatal police encounters nationwide”—a shocking statistic given that only around 7% of our nation’s police departments had a Ring partnership at the time. In one specific instance, a woman shared footage of an unidentified man on her porch on Amazon Ring’s Neighbors app, which is patrolled by police. The man was later shot by sheriff’s deputies.”  

CNET, Consumer Reports, Digital Trends, TechRadar, Tom’s Guide, and Wirecutter all posted statements declaring solidarity with Black Lives Matter during protests last summer. However, they have failed to back up their statements with action as they continue to recommend racist Ring products. By awarding Amazon Ring cameras “best in their category” or only enacting temporary suspensions, these outlets are complicit in the violence police wage against Black and brown communities. Despite the outlets’ claims that reviews are neutral, there is no neutrality when it comes to racism.

The signing organizations include: Fight for the Future, Action Center on Race and the Economy (ACRE), Athena Coalition, Backbone Campaign, Color of Change, Demand Progress Education Fund, Institute for Local Self-Reliance, Jobs With Justice, Kairos, LAANE, Media Alliance, MediaJustice, Mijente, MPower Change, Oakland Privacy, Open MIC (Open Media & Information Companies Initiative), Partnership for Working Families, Presente.org, Public Citizen, S.T.O.P. - The Surveillance Technology Oversight Project, Secure Justice, and Threshold.

Leaders from the organizations participating in the campaign issued the following statements, and are available for comment upon request:

The following can be attributed to Myaisha Hayes, Campaign Strategies Director at MediaJustice, (pronouns: she/her): “The only recommendation tech review editors should be making to consumers is to not buy Amazon Ring. These outlets can’t seriously declare that ‘Black Lives Matter’ while advertising surveillance products that harm us. Those of us familiar with the history of Black activism understand that our right to organize and protest has always been under constant attack. Just a few years ago, the FBI labeled Black activists as ‘Black Identity Extremists’ and warned all local law enforcement agencies that Black protesters posed a significant threat to our public safety. This shameful history and practice of undermining Black-led movements is great business for corporations like Amazon that provide the state with racist surveillance tools to track down and cage our loved ones. As things stand now, millions of households have been deputized by Amazon Ring to expand and digitize the state’s racist policing—and tech review editors are perpetuating this oppression.”

The following can be attributed to Jessica Quiason, Deputy Research Director at Action Center on Race and The Economy (ACRE), (pronouns: she/her): “Ring is just one component in an endless arsenal of privately-owned, profit-driven tech that expands on state systems of surveillance and policing of Black and Brown people. These cameras invite the police to have their eyes and ears on our very doorsteps while also creating a profit for Amazon which is more and more invested in expanding the powers and reach of the State. We cannot surveil and police our way to safety. Communities keep communities safe, through public investments and democratic decision-making where our voices and expertise are centered, not law enforcement and corporate executives.”

The following can be attributed to Evan Greer, Deputy Director of Fight for the Future, (pronouns: she/her): “Any tech review site that recommends Amazon Ring is complicit in exacerbating the racist police violence and surveillance that’s getting people killed in Black and brown communities. Full stop. Recommending surveillance devices that measurably increase racial profiling is unconscionable. Product review sites do not recommend or review stalkerware used by abusers because this technology is inherently harmful and recommending it would be immoral. Amazon Ring is no different. If sites like Consumer Reports, CNET, and Wirecutter don’t rescind their recommendation of Ring, they’re saying they’re okay promoting racism and shilling for a product that’s incompatible with civil liberties and democracy.”

The following can be attributed to Color Of Change Vice President Arisha Hatch: “Surveillance technologies rely on algorithms with racial biases and privacy vulnerabilities baked into their software, posing a grave threat to Black people’s safety and wellbeing. Since 2018, Color Of Change and our millions of members have demanded that Amazon address the concerns of civil rights advocates and the larger public about the company’s attempts to peddle products, such as Ring, that enable state-sponsored discrimination and police violence against Black and brown communities.

Despite Amazon leadership’s knowledge of the dangerous consequences of their surveillance products, they continuously choose to prioritize profit over our lives by marketing these products as ‘security’ tools and building on racist fears to sell them. Amazon was quick to publicly support Black Lives Matter amid the racial justice protests last summer, but those words ring hollow in the face of their complicity in fueling discriminatory policing tools and practices.    

Given the media’s role in holding corporations accountable for unethical practices as well as journalists’ position as trusted gatekeepers of factual information, we call on CNET, Consumer Reports, Digital Trends, TechRadar, Tom’s Guide, and Wirecutter to immediately halt the promotion of Amazon’s Ring and similar facial recognition products in your respective outlets. Failure to do so will only further enable corporate giants like Amazon to abuse their power to churn profits at the expense of Black lives.”

###


Joint Letter from 20+ racial justice and civil rights groups calling on tech review sites to stop recommending Amazon’s racist Ring cameras


Dear Editors of CNET, Consumer Reports, Digital Trends, TechRadar, Tom’s Guide, and Wirecutter,

Given that Amazon’s Ring technology directly threatens Black and brown communities, 20+ racial justice, civil liberties, and privacy rights organizations are calling on you to rescind your recommendation of all Amazon Ring products.

Ring cameras surveil millions of Americans, from children playing in the park to people visiting health clinics to protesters exercising their First Amendment rights. Alongside the massive growth of this private network of cameras, the tech giant is aggressively expanding its police partnerships. With over 2,000 partnerships, Amazon’s doorbell, floodlight, mailbox, and dash cameras record and collect data on our whereabouts, our homes, and our communities. This massive surveillance dragnet poses an existential Orwellian threat to the daily lives of the public at large and to our democracy—but for Black and brown communities Amazon Ring technology could put lives in immediate danger.

Putting Black lives in danger is part of Amazon Ring’s business model. The tech giant weaponizes racist, fear-mongering culture by using racially-coded language and dog whistles to promote Ring products and partnerships. Simultaneously, they have sold their racist Rekognition facial identification technology to police departments. Amazon marketed Rekognition to police with the full awareness of two damning facts: first, that police misuse facial recognition, and second, that Rekognition disproportionately misidentifies Black and brown people, transgender people, and women.

On top of it all, Amazon’s Neighbors app is designed to gamify profiling Black and brown people via racist neighborhood surveillance. Amazon’s private surveillance network fuels the criminalization of Black and brown people by amplifying existing racism in our communities and policing––further subjecting communities of color to repressive police violence and feeding a system of mass incarceration. 

Amazon Ring is also being used to surveil, intimidate, and punish Black Lives Matter protesters. Recently, the Electronic Frontier Foundation released records obtained from Los Angeles Police Department (LAPD) showing detectives requesting footage of Black Lives Matter protests from Ring users. This video was used by detectives to identify and track protesters who took to the streets in the wake of George Floyd’s murder. LAPD did not act alone. Liaisons working for Amazon Ring helped the department send bulk footage requests to regions throughout the city.

It’s not surprising Amazon helped police use their surveillance dragnet to track down the very protesters fighting to dismantle the racist, repressive, militarized law enforcement system Amazon profits from. Roughly half of the police departments partnered with Amazon “are responsible for over a third of fatal police encounters nationwide”—a shocking statistic given that only around 7% of our nation’s police departments had a Ring partnership at the time. In one specific instance, a woman shared footage of an unidentified man on her porch on Amazon Ring’s Neighbors app, which is patrolled by police. The man was later shot by sheriff’s deputies.  

It is surprising that you continue to recommend people buy Ring products. These devices threaten Black lives, which renders them ineligible for “best in their category” endorsements.

Some of the consumers using your reviews to make purchasing decisions live in Black and brown communities. They have Black and brown loved ones, undocumented family members, and activist friends. Through your recommendation, they are unknowingly tracking the people they love for police agencies. A purchase incorrectly believed to keep them and their loved ones safe actually endangers their lives. In assessing a product’s safety, it’s incumbent upon you to evaluate these harms and the negative impacts these products have on society along with the other criteria you take into consideration.

Your outlets all declared Black Lives Matter. You have the power now to act in accordance with that belief. Rescind your recommendation of Amazon Ring cameras and update all relevant guides.

Sincerely, 

Action Center on Race and the Economy (ACRE)

Athena Coalition

Backbone Campaign

Color of Change

Demand Progress Education Fund

Fight for the Future

Institute for Local Self-Reliance

Jobs With Justice

Kairos

LAANE

Media Alliance

MediaJustice

Mijente

MPower Change

Oakland Privacy

Open MIC (Open Media & Information Companies Initiative)

Partnership for Working Families

Presente.org

Public Citizen

S.T.O.P. - The Surveillance Technology Oversight Project

Secure Justice

Threshold

###
