Digital forensics technologies, including facial recognition and automatic number plate recognition, must be prohibited for uses that pose an unacceptable risk to human rights.
The authorised professional practice acknowledges that the police data on which the algorithm is trained is 'potentially' unrepresentative, but it does not put forward remedies for overcoming this problem. An algorithmic transparency report and an evaluation of the data-driven technology are not adequate measures for addressing it. It has been widely reported that marginalised and racialised communities are overpoliced and disproportionately likely to be the target of police powers; these communities are in turn overrepresented in police data, skewing and biasing the dataset that goes on to form the basis on which the algorithm is trained.
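To illustrate the feedback loop described above, here is a minimal, hypothetical simulation. It is a sketch only, not drawn from any real police system or dataset; the group labels, population sizes, offence rate, and patrol rates are all assumptions chosen for illustration.

```python
# Hypothetical sketch of how over-policing one group skews the data
# an algorithm is later trained on. All numbers are illustrative.
import random

random.seed(0)

TRUE_OFFENCE_RATE = 0.05              # identical underlying rate for both groups
PATROL_RATE = {"A": 0.10, "B": 0.40}  # group B is policed four times as often

records = []                          # the "police data" a model would learn from
for group, patrol_rate in PATROL_RATE.items():
    for _ in range(100_000):          # same-sized populations
        offended = random.random() < TRUE_OFFENCE_RATE
        stopped = random.random() < patrol_rate
        if offended and stopped:      # only policed offences enter the dataset
            records.append(group)

for group in PATROL_RATE:
    share = records.count(group) / len(records)
    print(f"group {group}: {share:.0%} of recorded offences")
```

Although both groups offend at exactly the same underlying rate, group B accounts for roughly four fifths of the recorded offences, so any model trained on these records will "learn" that group B is higher risk. A transparency report describing this dataset would not remove the skew; it would only document it.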
This poses a serious risk to individuals' right not to be discriminated against on the basis of protected characteristics, as it
could lead to unfair outcomes for individuals, particularly those from marginalized communities who are already disproportionately impacted by biases in the CJ [criminal justice] system (Dukes & Kahn, 2017 cited in Dement & Inglis, 2024).
For a full explanation of our position, please download and read our briefing.