There is a saying in computer science: rubbish in, rubbish out. When we feed machines data that reflects our prejudices, they mimic them – from antisemitic chatbots to racially biased software. Does a horrifying future await people forced to live at the mercy of algorithms?

In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate of white people (45% versus 24%), according to the investigative journalism organisation ProPublica.
Compas and programs similar to it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for succour, only to find that the algorithms had a racial bias too.
Source: theguardian.com