

Using AI to Fight Trafficking Is Dangerous

The US State Department has released its annual Trafficking in Persons (TIP) Report, ranking nearly 200 countries' anti-trafficking efforts.

The report finds perpetrators increasingly use “social media, online advertisements, websites, dating apps, and gaming platforms” to force, defraud, or coerce job seekers into labor and sexual exploitation, and encourages technology companies to use “data and algorithm tools to detect human trafficking patterns, identify suspicious and illicit activity, and report” these to law enforcement.

Sweeping calls to collect data on marginalized populations and automate decisions about what constitutes a human trafficking pattern are dangerous. Women of color, migrants, and queer people face profiling and persecution under surveillance regimes that fail to distinguish between consensual adult sex work and human trafficking.

The TIP report says artificial intelligence (AI) language models can "detect, translate, and categorize key words used by traffickers to identify trafficking communication patterns." Unfortunately, language models are likely to be built on the discriminatory stereotypes that have plagued anti-trafficking efforts for decades.

A Department of Homeland Security campaign, for example, instructs hotels to train housekeeping staff to report “signs of human trafficking” based on indicators that conflate sex work with trafficking. Victims allegedly request “additional towels,” “wait at a table or bar to be picked up,” “dress inappropriately,” “rent rooms by the hour,” and collect cash “left on tables.” Such tropes cause disproportionate surveillance of poor, racialized, and transgender sex workers, and inaccurately categorize standard safety tactics – public first meetings, hygiene, avoiding traceable payments – as trafficking indicators.
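To make the false-positive mechanism concrete, here is a minimal, hypothetical sketch of the kind of rule-based "indicator" flagger such campaigns imply. The indicator strings below are illustrative paraphrases of the tropes the article describes, not taken from any real deployed system; the point is that ordinary guest behavior is indistinguishable from the "indicators," so the flagger fires on benign requests.

```python
# Hypothetical sketch of a naive indicator-matching flagger.
# Indicator strings paraphrase the tropes described above; they are
# assumptions for illustration, not a real system's rule set.
INDICATORS = [
    "additional towels",
    "rent by the hour",
    "cash on table",
    "waiting at the bar",
]

def flag_guest(staff_notes: str) -> list[str]:
    """Return every 'indicator' substring found in free-text staff notes."""
    notes = staff_notes.lower()
    return [ind for ind in INDICATORS if ind in notes]

# An unremarkable stay matches two "indicators" purely by coincidence,
# illustrating how everyday behavior gets flagged as suspicious.
print(flag_guest("Guest asked for additional towels and left cash on table"))
```

Because matching is purely lexical, the system cannot distinguish a trafficking situation from a guest who dislikes card fees, which is exactly the conflation the article criticizes.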

Studies show that digital tools and policies that take a similarly broad approach to collecting evidence of alleged exploitation online are dangerous and counterproductive. A 2024 report found platforms are "incentivized to overreport" potential child sexual abuse material (CSAM), leaving law enforcement "overwhelmed by the high volume" and unable to identify perpetrators. A 2022 study of technology that scraped and analyzed advertisements for sexual services found "misalignment between developers, users of the platform, and sex industry workers they are attempting to assist," concluding that these approaches are "ineffective" and "exacerbate harm."

Trafficking survivors, meanwhile, have warned that “trafficking data is both limited and notoriously inaccurate [and] bad data means bad learning.” Outsourcing to an algorithm the detection and reporting of “suspicious and illicit activity” is a recipe for perpetuating violence and discrimination against already marginalized people.

Disclaimer: The views expressed in this article are independent views solely of the author(s) expressed in their private capacity.
