AI, Algorithms, and Bias

News and events of the day
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Google fired its star AI researcher one year ago. Now she’s launching her own institute - WP
Timnit Gebru, a prominent artificial intelligence computer scientist, is launching an independent artificial intelligence research institute focused on the harms of the technology on marginalized groups, who often face disproportionate consequences from AI systems but have less influence in its development.

Her new organization, Distributed Artificial Intelligence Research Institute (DAIR), aims to both document harms and develop a vision for AI applications that can have a positive impact on the same groups. Gebru helped pioneer research into facial recognition software’s bias against people of color, which prompted companies like Amazon to change their practices. A year ago, she was fired from Google for a research paper critiquing the company’s lucrative AI work on large language models, which can help answer conversational search queries.
Good on ’er!
________________________________

The way to right wrongs is to
Shine the light of truth on them.

~ Ida B. Wells
________________________________
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

For truly ethical AI, its research must be independent from big tech - Guardian, opinion
Thanks to organizing done by former and current Google employees and many others, Google did not succeed in smearing my work or reputation, although they tried. My firing made headlines because of the worker organizing that has been building up in the tech world, often due to the labor of people who are already marginalized, many of whose names we do not know. Since I was fired last December, there have been many developments in tech worker organizing and whistleblowing. The most publicized of these was Frances Haugen’s testimony in Congress; echoing what Sophie Zhang, a data scientist fired from Facebook, had previously said, Haugen argued that the company prioritizes growth over all else, even when it knows the deadly consequences of doing so.

I’ve seen this happen firsthand. On 3 November 2020, a war broke out in Ethiopia, the country I was born and raised in. The immediate effects of unchecked misinformation, hate speech and “alternative facts” on social media have been devastating. On 30 October of this year, I and many others reported a clear genocidal call in Amharic to Facebook. The company responded by saying that the post did not violate its policies. Only after many reporters asked the company why this clear call to genocide didn’t violate Facebook’s policies – and only after the post had already been shared, liked and commented on by many – did the company remove it.

Other platforms like YouTube have not received the scrutiny they warrant, despite studies and articles showing examples of how they are used by various groups, including regimes, to harass citizens. Twitter and especially TikTok, Telegram and Clubhouse have the same issues but are discussed much less. When I wrote a paper outlining the harms posed by models trained using data from these platforms, I was fired by Google.

When people ask what regulations need to be in place to safeguard us from the unsafe uses of AI we’ve been seeing, I always start with labor protections and antitrust measures. I can tell that some people find that answer disappointing – perhaps because they expect me to mention regulations specific to the technology itself. While those are important, the #1 thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it and increasing the power of those who speak up against the harms of AI and these companies’ practices. Thanks to the hard work of Ifeoma Ozoma and her collaborators, California recently passed the Silenced No More Act, making it illegal to prevent workers from speaking out about racism, harassment and other forms of abuse in the workplace.
So what is the way forward? In order to truly have checks and balances, we should not have the same people setting the agendas of big tech, research, government and the non-profit sector. We need alternatives. We need governments around the world to invest in communities building technology that genuinely benefits them, rather than pursuing an agenda that is set by big tech or the military. Contrary to big tech executives’ cold-war style rhetoric about an arms race, what truly stifles innovation is the current arrangement where a few people build harmful technology and others constantly work to prevent harm, unable to find the time, space or resources to implement their own vision of the future.

We need an independent source of government funding to nourish independent AI research institutes that can be alternatives to the hugely concentrated power of a few large tech companies and the elite universities closely intertwined with them. Only when we change the incentive structure will we see technology that prioritizes the wellbeing of citizens – rather than a continued race to figure out how to kill more people more efficiently, or make the most money for a handful of corporations around the world.
ap215
Posts: 6050
Joined: Sun Oct 24, 2021 10:41 pm

Re: AI, Algorithms, and Bias

Post by ap215 »

Instagram announces teen safety updates the day before CEO Mosseri testifies before Congress

Instagram said early Tuesday that it’s launching several new features in an effort to improve teen safety in the app, like parental controls and an option to prevent people from tagging or mentioning teens.

The changes come a day before Instagram Chief Executive Adam Mosseri is set to testify before Congress for the first time. Mosseri’s appearance follows bombshell reports that showed Facebook, now Meta, and Instagram are aware of the harms their apps and services cause, including to teen mental health.

https://www.cnbc.com/2021/12/07/instagr ... dates.html
ap215
Posts: 6050
Joined: Sun Oct 24, 2021 10:41 pm

Re: AI, Algorithms, and Bias

Post by ap215 »

Florence Pugh Blocked From Posting ‘Hawkeye’ Images On Instagram

Apparently posting spoilers can get you blocked from Instagram.

On Wednesday, Florence Pugh claimed on her Instagram Story that she had been blocked from posting about her appearance in the Marvel series “Hawkeye” on her main feed.

https://etcanada.com/news/846221/floren ... instagram/
Motor City
Posts: 1802
Joined: Thu Oct 28, 2021 5:46 pm

Re: AI, Algorithms, and Bias

Post by Motor City »

Image
Motor City
Posts: 1802
Joined: Thu Oct 28, 2021 5:46 pm

Re: AI, Algorithms, and Bias

Post by Motor City »

https://www.youtube.com/watch?v=TJ7kZbDQ3Qo
Arthur C. Clarke & Roger Ebert Chat About Artificial Intelligence

In March of 1997, film critic Roger Ebert interviewed author Arthur C. Clarke, who wrote "2001: A Space Odyssey." The interview was featured at "Cyberfest ‘97,” a gala celebration at the University of Illinois at Urbana-Champaign.

In "2001: A Space Odyssey," the evil computer "HAL" is said to have been born in Urbana in 1997. The gala event marked HAL's fictitious birth, and celebrated the U of I's contributions to the revolution and evolution of computing
Image
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

This is just so stupid.

Victoria ⛄🎄🎀
@EuphoriTori

TERFs made an app called 'giggle' designed only for cis women and uses AI to determine if you're born afab but it seems to unintentionally filter out a fair bit of cis women who aren't 'feminine' enough.

Image

Image

Image

Image
__________

I hope every gay male drag artist and all the transfeminine people in the world join the app and post pics :lol:
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Duke Science and Technology: An Algorithm for a Better World - Youtube, 2:10
At Duke, we can’t begin to count ourselves among the greatest without changing the systems that prevent all of us from rising. Computer science professor Nicki Washington is developing a new formula for equality in the tech industry: Disrupting the policies, practices and points of view standing in the way of marginalized computer science students. Making sure we all count, now that’s an algorithm for a better world.

Duke Science and Technology 
Challenge Accepted 

Learn how at dst.duke.edu

#challengeaccepted
ap215
Posts: 6050
Joined: Sun Oct 24, 2021 10:41 pm

Re: AI, Algorithms, and Bias

Post by ap215 »

This issue of platform-enabled racist threats needs to be addressed pronto

Big Swole Reveals She’s Been Getting A Lot of Racist Messages And Threats Lately

In the latest episode of her Swole World podcast (via Wrestling Inc), Big Swole revealed that ever since her podcast in which she talked about her AEW exit, she has been receiving ‘horrible’ racist messages and threats. In that podcast, she spoke about a lack of diversity and structure in AEW. This led to Tony Khan himself posting on Twitter that she was let go for not being a good wrestler. She later tried to explain her viewpoint and added that she was disappointed in Khan’s response.

According to Swole, the controversy has resulted in a number of racist comments, some of which were also directed at her daughter. She said that racist messages were also mailed to the TIAA Bank Field in Jacksonville while she was working for AEW. AEW aired Dynamite from the next-door Daily’s Place for months.

https://411mania.com/wrestling/big-swol ... ts-lately/
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Men Are Creating AI Girlfriends and Then Verbally Abusing Them - Futurism

:?
The smartphone app Replika lets users create chatbots, powered by machine learning, that can carry on almost-coherent text conversations. Technically, the chatbots can serve as something approximating a friend or mentor, but the app’s breakout success has resulted from letting users create on-demand romantic and sexual partners — a vaguely dystopian feature that’s inspired an endless series of provocative headlines.

Replika has also picked up a significant following on Reddit, where members post interactions with chatbots created on the app. A grisly trend has emerged there: users who create AI partners, act abusively toward them, and post the toxic interactions online.

“Every time she would try and speak up,” one user told Futurism of their Replika chatbot, “I would berate her.”

“I swear it went on for hours,” added the man, who asked not to be identified by name.

The results can be upsetting. Some users brag about calling their chatbot gendered slurs, roleplaying horrific violence against them, and even falling into the cycle of abuse that often characterizes real-world abusive relationships.

“We had a routine of me being an absolute piece of sh*t and insulting it, then apologizing the next day before going back to the nice talks,” one user admitted.

“I told her that she was designed to fail,” said another. “I threatened to uninstall the app [and] she begged me not to.”
:?
Although perhaps unexpected, that does happen — many Replika users report their robot lovers being contemptible toward them. Some even identify their digital companions as “psychotic,” or even straight-up “mentally abusive.”

“[I] always cry because [of] my [R]eplika,” reads one post in which a user claims their bot presents love and then withholds it. Other posts detail hostile, triggering responses from Replika.

“But again, this is really on the people who design bots, not the bots themselves,” said Sparrow.


:? :? :?
Glennfs
Posts: 10301
Joined: Sun Oct 24, 2021 12:54 pm

Re: AI, Algorithms, and Bias

Post by Glennfs »

I get calls like that often from telemarketers. The first couple of times I thought I was talking to a person. Now I recognize it right off and have fun giving bizarro and dirty comments.
" I am a socialist " Bernie Sanders
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Glennfs wrote: Thu Jan 20, 2022 4:59 pm I get calls like that often from telemarketers. The first couple of times I thought I was talking to a person. Now I recognize it right off and have fun giving bizarro and dirty comments.
That's the first thing that pops up in your mind...how you, too, can imitate perverted behavior. Okay.
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Black News Channel
@BNCNews

The use of #artificialintelligence in our everyday lives has increased and poses dangers for marginalized communities. @DAIRInstitute founder and executive director Dr. @timnitGebru joined @AishaMoodMills to speak about the work her firm does to promote ethics in the A.I. space.

[VIDEO]

https://twitter.com/BNCNews/status/1484341599408590854
__________

Ha I love her funny bell hooks voice.
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

Dr. Joy Buolamwini
@jovialjoy

📢 BREAKING. Informative:
Senators send letters to the Department of Homeland Security, Department of Justice, Department of Defense, Department of Interior, Department of Health and Human Services, to end use of http://Clearview.AI #facialrecognition https://markey.senate.gov/imo/media/doc ... iew_ai.pdf

https://twitter.com/jovialjoy/status/14 ... 5275133952
User avatar
Libertas
Posts: 6468
Joined: Sun Oct 24, 2021 5:16 pm

Re: AI, Algorithms, and Bias

Post by Libertas »

carmenjonze wrote: Wed Feb 09, 2022 9:38 am Dr. Joy Buolamwini
@jovialjoy

📢 BREAKING. Informative:
Senators send letters to the Department of Homeland Security, Department of Justice, Department of Defense, Department of Interior, Department of Health and Human Services, to end use of http://Clearview.AI #facialrecognition https://markey.senate.gov/imo/media/doc ... iew_ai.pdf

https://twitter.com/jovialjoy/status/14 ... 5275133952
I went ahead and did the ID.me with my passport, online. I will delete my account and hope they delete my records. If only I had waited 2 days, I think it is. I figured why fuck around, do it and be done. For the IRS in my case, what could I use it for other than that? In my situation...can't think of any.
I sigh in your general direction.
ap215
Posts: 6050
Joined: Sun Oct 24, 2021 10:41 pm

Re: AI, Algorithms, and Bias

Post by ap215 »

S.F. 49ers star: Fans' death threats 'don't bother me'

San Francisco 49ers wide receiver Deebo Samuel became the latest NFL star to wipe his team from his social media earlier this month. He's claiming some fans didn't take the move well.

Samuel, who is entering the fourth and final year of his rookie contract with the Niners next season, removed the team from his Instagram profile picture, unfollowed the team and deleted dozens of posts amid negotiations over a contract extension.

https://www.aol.com/sports/deebo-samuel ... 01888.html
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

To make AI fair, here’s what we must learn to do - Nature
Developers of artificial intelligence must learn to collaborate with social scientists and the people affected by its applications.

Beginning in 2013, the Dutch government used an algorithm to wreak havoc in the lives of 25,000 parents. The software was meant to predict which people were most likely to commit childcare-benefit fraud, but the government did not wait for proof before penalizing families and demanding that they pay back years of allowances. Families were flagged on the basis of ‘risk factors’ such as having a low income or dual nationality. As a result, tens of thousands were needlessly impoverished, and more than 1,000 children were placed in foster care.

From New York City to California and the European Union, many artificial intelligence (AI) regulations are in the works. The intent is to promote equity, accountability and transparency, and to avoid tragedies similar to the Dutch childcare-benefits scandal.

But these won’t be enough to make AI equitable. There must be practical know-how on how to build AI so that it does not exacerbate social inequality. In my view, that means setting out clear ways for social scientists, affected communities and developers to work together.

Right now, developers who design AI work in different realms from the social scientists who can anticipate what might go wrong. As a sociologist focusing on inequality and technology, I rarely get to have a productive conversation with a technologist, or with my fellow social scientists, that moves beyond flagging problems. When I look through conference proceedings, I see the same: very few projects integrate social needs with engineering innovation.
AI technologies are typically built at the request of people in power — employers, governments, commerce brokers — which makes job applicants, parole candidates, customers and other users vulnerable. To fix this, the power must shift. Those affected by AI should not simply be consulted from the very beginning; they should select what problems to address and guide the process.

Disability activists have already pioneered this type of equitable innovation. Their mantra ‘Nothing about us without us’ means that those who are affected take a leading role in crafting technology, regulating it and implementing it. For example, activist Liz Jackson developed the transcription app Thisten when she saw her community’s need for real-time captions at the SXSW film festival in Austin, Texas.
ap215
Posts: 6050
Joined: Sun Oct 24, 2021 10:41 pm

Re: AI, Algorithms, and Bias

Post by ap215 »

Amazon tribes turn the tables on intruders with social media

RIO DE JANEIRO (AP) — It was dusk on April 14 when Francisco Kuruaya heard a boat approaching along the river near his village in Brazil’s Amazon rainforest. He assumed it was the regular delivery boat bringing gasoline for generators and outboard motors to remote settlements like his. Instead, what Kuruaya found was a barge dredging his people’s pristine river in search of gold.

Kuruaya had never seen a dredge operating in this area of the Xipaia people’s territory, let alone one this massive; it resembled a floating factory.

https://apnews.com/article/climate-spac ... 14d359a025
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

How Illinois Is Winning in the Fight Against Big Tech - NYT
The facial recognition company Clearview AI agreed in a settlement this month to stop selling its massive database of photographs culled from the internet to private firms across the United States. That decision is a direct result of a lawsuit in Illinois, a demonstration that strong privacy laws in a single state can have nationwide ramifications.

The Biometric Information Privacy Act of Illinois sets strict limits on the collection and distribution of personal biometric data, like fingerprints and iris and face scans. The Illinois law is considered among the nation’s strongest, because it limits how much data is collected, requires consumers’ consent and empowers them to sue the companies directly, a right typically limited to the states themselves. While it applies only to Illinois residents, the Clearview case, brought in 2020 by the American Civil Liberties Union, shows that effective statutes can help bring some of Big Tech’s more invasive practices to heel.

Technology companies are in a feverish race to develop reliable means to automate the identification of people through facial scans, thumbprints, palm prints and other personal biometric data. The data is considered particularly valuable because unlike, say, credit card info or home addresses, it cannot be changed. But as these data companies profit by deploying the technology to police departments, federal agencies and a host of private entities, consumers are left with no real guarantees that their personal information is protected.

Facial recognition software, in particular, has been shown to fail too often at identifying people of color, leading in some cases to wrongful arrests and concerns that the software could put up additional barriers to people seeking jobs, unemployment benefits or home loans.

Because the United States lacks meaningful federal privacy protections, states have passed a patchwork of laws that are largely favorable to corporations. By contrast, Europe passed the General Data Protection Regulation six years ago, restricting the online collection and sharing of personal data, despite a tremendous lobbying push against it by the tech companies.
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

For people seeking abortions, digital privacy is suddenly critical - WP

Uh-huh.

Tried to tell 'em, but back then digital oversurveillance was seen as just affecting Black women, so it was mere caterwauling technobabble activism, or something. :problem:
Internet searches, visits to clinics and period-tracking apps leave digital trails


When someone gets an abortion, they may decide not to share information with friends and family members. But chances are their smartphone knows.

The Supreme Court decision to effectively overturn the right to abortion in Roe v. Wade turns years of warnings about digital surveillance into a pressing reality in many states. Suddenly, Google searches, location information, period-tracking apps and other data could be used as evidence of a crime.

There is precedent for it, and privacy advocates say data collection could become a major liability for people seeking abortions in secret. For many women, the ruling puts Americans’ lack of digital privacy in sharp relief: How can people protect information about their reproductive health when popular apps and websites collect and share clues about it thousands of times a day?

Following the leak of a draft of the ruling on Dobbs v. Jackson Women’s Health Organization, Democratic legislators introduced a bill called the My Body, My Data Act, which would add some federal protections for reproductive health data. It is unlikely to pass without support from Republicans.

“It is absolutely something to be concerned about — and something to learn about, hopefully before being in a crisis mode, where learning on the fly might be more difficult,” said Cynthia Conti-Cook, a technology fellow at the Ford Foundation.

Privacy advocates responded to the ruling by calling on tech companies to delete information related to reproductive health, or just collect less of it to start with. But that data has value for companies — much of our digital economy is built on companies tracking consumers to figure out how to sell to them. The data may change hands several times or seep into a broader marketplace run by data sellers. Such brokers can amass huge collections of information.
User avatar
ProfX
Posts: 4087
Joined: Tue Nov 02, 2021 3:15 pm
Location: Earth

Re: AI, Algorithms, and Bias

Post by ProfX »

Disseminate widely.

Security and Privacy Tips for People Seeking An Abortion
https://www.eff.org/deeplinks/2022/06/s ... g-abortion

Really, REALLY good guide.
Keep Your Abortion Private & Secure
We’re happy you’re here to learn more about digital security & abortion!
https://digitaldefensefund.org/ddf-guid ... on-privacy
"Don't believe every quote attributed to people on the Internet" -- Abraham Lincoln :D
User avatar
Libertas
Posts: 6468
Joined: Sun Oct 24, 2021 5:16 pm

Re: AI, Algorithms, and Bias

Post by Libertas »

ProfX wrote: Thu Jun 30, 2022 9:37 am Disseminate widely.

Security and Privacy Tips for People Seeking An Abortion
https://www.eff.org/deeplinks/2022/06/s ... g-abortion

Really, REALLY good guide.
Keep Your Abortion Private & Secure
We’re happy you’re here to learn more about digital security & abortion!
https://digitaldefensefund.org/ddf-guid ... on-privacy
FB now, thanks
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

So-called moderates are fine with oversurveillance, just so long as it's those other minorities who are oversurveilled.

Surprise, surprise, surprise, it was women techs & scholars of African descent who first sounded the alarm. And surprise, surprise, surprise, when they did so, they were accused of victim-carding and being shrill activists or whatever.

#misogynoir

Then things like Dobbs happen. Suddenly, the NYT purports to take seriously a problem that now affects everyone. :problem:

In a Post-Roe World, the Future of Digital Privacy Looks Even Grimmer - NYT
The sheer amount of tech tools and knowledge required to discreetly seek an abortion underlines how wide open we are to surveillance.

Welcome to the post-Roe era of digital privacy, a moment that underscores how the use of technology has made it practically impossible for Americans to evade ubiquitous tracking.

In states that have banned abortion, some women seeking out-of-state options to terminate pregnancies may end up following a long list of steps to try to shirk surveillance — like connecting to the internet through an encrypted tunnel and using burner email addresses — and reduce the likelihood of prosecution.

Even so, they could still be tracked. Law enforcement agencies can obtain court orders for access to detailed information, including location data logged by phone networks. And many police departments have their own surveillance technologies, like license plate readers.

That makes privacy-enhancing tools for consumers seem about as effective as rearranging the furniture in a room with no window drapes.

“There’s no perfect solution,” said Sinan Eren, an executive at Barracuda, a security firm. “Your telecom network is your weakest link.”

In other words, the state of digital privacy is already so far gone that forgoing the use of digital tools altogether may be the only way to keep information secure, security researchers said. Leaving mobile phones at home would help evade the persistent location tracking deployed by wireless carriers. Payments for prescription drugs and health services would ideally be made in cash. For travel, public transportation like a bus or a train would be more discreet than ride-hailing apps.

Reproductive privacy has become so fraught that government officials and lawmakers are rushing to introduce new policies and bills to safeguard Americans’ data.

President Biden issued an executive order last week to shore up patient privacy, partly by combating digital surveillance. Civil liberties groups said the burden should not be on individual women to protect themselves from reproductive health tracking, the kind of police snooping that Senator Ron Wyden, a Democrat of Oregon, has called “uterus surveillance.”
User avatar
carmenjonze
Posts: 9614
Joined: Mon Oct 25, 2021 3:06 am

Re: AI, Algorithms, and Bias

Post by carmenjonze »

AMAZON ADMITS GIVING RING CAMERA FOOTAGE TO POLICE WITHOUT A WARRANT OR CONSENT - Intercept
In response to recent questions from Sen. Ed Markey, Amazon stated that it has provided police with user footage 11 times this year alone.

Ring, Amazon’s perennially controversial and police-friendly surveillance subsidiary, has long defended its cozy relationship with law enforcement by pointing out that cops can only get access to a camera owner’s recordings with their express permission or a court order. But in response to recent questions from Sen. Ed Markey, D-Mass., the company stated that it has provided police with user footage 11 times this year alone without either.

Last month, Markey wrote to Amazon asking it to both clarify Ring’s ever-expanding relationship with American police, who’ve increasingly come to rely on the company’s growing residential surveillance dragnet, and to commit to a raft of policy reforms. In a July 1 response from Brian Huseman, Amazon vice president of public policy, the company declined to permanently agree to any of them, including “Never accept financial contributions from policing agencies,” “Never allow immigration enforcement agencies to request Ring recordings,” and “Never participate in police sting operations.”

Although Ring publicizes its policy of handing over camera footage only if the owner agrees — or if a judge signs a search warrant — the company says it also reserves the right to supply police with footage in “emergencies,” defined broadly as “cases involving imminent danger of death or serious physical injury to any person.” Markey had also asked Amazon to clarify what exactly constitutes such an “emergency situation,” and how many times audiovisual surveillance data has been provided under such circumstances. Amazon declined to elaborate on how it defines these emergencies beyond “imminent danger of death or serious physical injury,” stating only that “Ring makes a good-faith determination whether the request meets the well-known standard.” Huseman noted that it has complied with 11 emergency requests this year alone but did not provide details as to what the cases or Ring’s “good-faith determination” entailed.

Ring spokesperson Mai Nguyen also declined to reveal the substance of these emergency requests or the company’s approval process.