Studies show that human biases can be reproduced, and even amplified, in computer models. In addition, incomplete or unrepresentative training data can cause those models to produce worse predictions for underrepresented groups.
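The second effect can be demonstrated with a toy sketch. In the hypothetical example below (all names and numbers are illustrative, not drawn from any study), two groups follow different true decision rules, but one group makes up only 5% of the training data. A single model fit to minimize overall training error ends up tracking the majority group's rule, so its accuracy on the minority group is measurably worse:

```python
import random

random.seed(0)

def make_group(n, boundary):
    """Points with a group-specific true rule: label is 1 iff x > boundary."""
    xs = [random.uniform(-4, 6) for _ in range(n)]
    return [(x, int(x > boundary)) for x in xs]

# Group A is heavily overrepresented in training (950 vs 50 samples),
# and the two groups have different true boundaries (0.0 vs 2.0).
train = make_group(950, boundary=0.0) + make_group(50, boundary=2.0)

def fit_threshold(data):
    """Fit one global threshold by minimizing total training error."""
    best_t, best_err = 0.0, float("inf")
    for t in sorted(x for x, _ in data):
        err = sum(int(x > t) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(data, t):
    return sum(int(x > t) == y for x, y in data) / len(data)

t = fit_threshold(train)

# Balanced test sets expose the gap the skewed training data created:
# the learned threshold sits near group A's boundary, so many group B
# points between the two boundaries are misclassified.
acc_a = accuracy(make_group(1000, boundary=0.0), t)
acc_b = accuracy(make_group(1000, boundary=2.0), t)
print(f"threshold={t:.2f}  group A accuracy={acc_a:.2f}  group B accuracy={acc_b:.2f}")
```

The model is not "biased" by intent; the accuracy gap falls directly out of the data imbalance, which is exactly why representative training data matters.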


To learn more about how encoded biases in artificial intelligence can lead to worse outcomes for protected groups, as well as upcoming legislation and international proposals on algorithmic bias, download our Algorithmic bias overview.