I don't think any decision can be truly 'objective', and the same goes for predictive policing. A predictive model has to base its predictions on something, and in this case historical data serves as its assumptions. The data-gathering process itself is biased for many reasons (demographic skew, lack of credible sources, etc.), so the predictions could be very accurate for some people and very poor for others. This reminds me of Apple's Face ID technology, which was reportedly unable to tell some people apart. (source) It relates to my point because Face ID might work perfectly for one group of people while failing for others, in this case Asian users.
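To make the point concrete, here is a minimal, entirely made-up sketch (the groups, scores, and numbers are invented for illustration, not real policing data): a simple classifier tuned on data dominated by one group can look accurate overall while performing badly on an underrepresented group whose patterns differ.

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Synthetic (score, label) pairs; label 1 means a recorded incident.
# Group A dominates the data; group B's scores follow a different pattern.
group_a = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)] * 10
group_b = [(0.6, 0), (0.7, 0), (0.3, 1), (0.4, 1)]

train = group_a  # the "historical data" only reflects group A

# Pick the cutoff threshold that best separates the training data.
best_t = max((t / 10 for t in range(1, 10)),
             key=lambda t: accuracy([int(s >= t) for s, _ in train],
                                    [l for _, l in train]))

def classify(data):
    return [int(s >= best_t) for s, _ in data]

acc_a = accuracy(classify(group_a), [l for _, l in group_a])
acc_b = accuracy(classify(group_b), [l for _, l in group_b])
print(f"accuracy on group A: {acc_a:.2f}")  # high
print(f"accuracy on group B: {acc_b:.2f}")  # much lower
```

The model is "good" by its own training metric, yet that metric never saw group B, which is exactly the kind of hidden unevenness biased data can produce.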