Argentina is using a facial recognition system that tracks child suspects, Human Rights Watch says

Publishing such information violates the Convention on the Rights of the Child, a U.N. agreement to which Argentina is a signatory, which says a child’s privacy should be respected at all stages of legal proceedings, said Hye Jung Han, a researcher and advocate in the children’s rights division at Human Rights Watch, who was the lead researcher on the report.

Argentina’s embassy in Washington did not respond immediately to a request for comment.

On a visit to Argentina in May 2019, the United Nations Special Rapporteur on the right to privacy warned the Argentine government that CONARC’s database contained 61 children. By that October, Argentina’s justice ministry said there was no children’s data in CONARC. But the report contends the practice continued after the U.N. visit, with 25 additional children added to the database.

An HRW review of CONARC also found that the public information about the children was peppered with inaccuracies.

“Some children appear multiple times,” José Miguel Vivanco, director of Human Rights Watch’s Americas division, wrote Friday in a public letter of concern to Argentine President Alberto Fernández. “There are blatant typographical errors, conflicting details, and multiple national ID numbers assigned to single individuals, raising the risk of mistaken matches. In one example, a 3-year-old is listed as being wanted for aggravated robbery.”

He added that the practice of using this information for facial recognition tracking also poses huge accuracy risks, given the higher rate of misidentification of children with such technology.

“Facial recognition technology has considerably higher error rates for children, in part because most algorithms have been trained, tested and tuned only on adult faces,” Vivanco wrote. “In addition, since children experience rapid and drastic changes in their facial features as they age, facial recognition algorithms also often fail to identify a child who is a year or two older than in a reference photo.”

Even before these findings, the facial recognition software deployed in the country’s capital in April 2019 had been a source of controversy. According to city data from 2015, over 1.4 million people ride the Buenos Aires subway every day, meaning even a 0.01 percent error rate in the technology could put over a hundred people at risk of being misidentified daily.
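The "over a hundred people" figure follows from simple arithmetic on the numbers cited above; as an illustrative back-of-the-envelope sketch (using only the ridership and error-rate figures in the text, not any official calculation):

```python
# Back-of-the-envelope check of the figures cited in the text.
daily_riders = 1_400_000         # Buenos Aires subway riders per day (2015 city data)
errors_per_ten_thousand = 1      # 0.01 percent = 1 in 10,000

# Expected misidentifications per day at that error rate.
misidentified_per_day = daily_riders * errors_per_ten_thousand // 10_000
print(misidentified_per_day)     # 140 -- i.e., "over a hundred people" daily
```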

Buenos Aires reported a 4 percent error rate in its technology between the end of April and mid-July 2019. Innocent civilians have been detained, handcuffed and brought to police stations after being wrongfully identified by the technology.

Around the world, facial recognition as a tool of law enforcement has sparked warnings and concerns from human rights activists. Some say the error rates are high enough to render the technology useless. Others say a lack of understanding of the technology can lead to irresponsible use, or that it can reinforce racial discrimination.

“This is an example of law enforcement or the government procuring technology that we don’t quite understand,” Han said.

But equally worrisome, she said, is the authorities’ apparent view that the technology would foster increased accuracy.

Police “were never told what to do in the event a facial recognition system produced an error,” Han said.
