Artificial Intelligence will now judge its human counterparts

12 May 2017 | By Anish Chakraborty

Durham police in England are gearing up to use an AI tool to help decide whether a suspect should be kept in custody or released.

Putting human lives under the scrutiny of artificial intelligence may sound odd, but as the technology advances, this may become the norm as a way to avoid human error.

Here's all that you need to know.



Algorithm: Whether suspects should be remanded or not

The Harm Assessment Risk Tool (HART) is an algorithm that will help Durham police decide whether a suspect should walk free or not.

The system will start operating within the next 2-3 months and will make its calls based on data collated over the years, classifying suspects as being at "low, medium or high risk" of committing a crime if released.


Forecasting risk of future harm

"The basic logic is to use the prior histories of thousands of people arrested and processed in Durham to forecast the level of risk of high harm they will cause by criminal acts within two years after they are arrested." -Professor Lawrence Sherman.

Accuracy: How accurate are these predictions?

The algorithm has been supplied with around five years of collected data, including each suspect's "offending history, gender, and postcode."
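The workflow the article describes (a suspect's record goes in, a three-band risk rating comes out) can be sketched as a toy classifier. Everything below, including the feature choice, weights, and thresholds, is invented for illustration; HART's actual model, reportedly a machine-learning system trained on Durham custody records, is far more complex.

```python
# Toy sketch of a three-band risk classifier, loosely in the spirit of HART.
# All weights and thresholds here are hypothetical, chosen only to illustrate
# the input -> "low"/"medium"/"high" shape of such a tool.

def classify_risk(prior_offences: int, violent_history: bool) -> str:
    """Return a 'low', 'medium' or 'high' risk band for a suspect record."""
    # Invented scoring rule: each prior offence adds 1, a violent history adds 5.
    score = prior_offences + (5 if violent_history else 0)
    if score >= 8:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example: a first-time, non-violent suspect lands in the lowest band.
print(classify_risk(prior_offences=0, violent_history=False))  # -> low
```

A real system would be trained on historical outcomes rather than hand-set thresholds, which is precisely why questions about the training data (and any bias in it) matter.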

When HART was tested in 2013, it identified low-risk suspects with 98% accuracy and high-risk suspects with 88% accuracy.

However, during that period HART's results were only monitored and were not taken into account in custody decisions.

Race: Will racial bias also get into the mix?

News website ProPublica released a report showing that another AI system, used by Florida authorities, exhibited racial bias, producing disproportionately negative forecasts for black suspects compared to white ones.

While HART doesn't include race among its predictors, officials have acknowledged that postcode data could act as a proxy for it, meaning certain postcodes could introduce indirect bias into the system's judgments.

Bringing unwanted emotions out

"To some extent, what learning models do is bring out into the foreground hidden and tacit assumptions that have been made all along by human beings." -Prof Cary Coglianese, a political scientist at the University of Pennsylvania.

HART: Walking on a very thin line

When asked how accurate HART's judgments were, the researchers told authorities that the algorithm relies on several predictors that cannot easily be manipulated.

Moreover, HART's role would be "advisory" rather than final, and if any debate arose over how the system reached a specific conclusion, an audit trail could be provided for scrutiny.

Limitations: HART is "interesting" but not error-free

Helen Ryan, head of law at the University of Winchester, called HART an "interesting" and "positive" advancement, but noted that it is not free of limitations.

For instance, HART can only access Durham Constabulary's data, and therefore cannot make an accurate judgment about any offender whose criminal history originates outside Durham police's jurisdiction.