Kate Crawford has an article in The New York Times, "Artificial Intelligence's White Guy Problem," on bias in AI.
It discusses unintended biases in AI systems currently in use around the country. The core problem is that developers often fail to train these systems on data that is complete and representative enough to avoid bias, so the resulting models inherit the blind spots of their builders. For example: "A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk."
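The disparity ProPublica described is a gap in false positive rates between groups: the share of people who did not reoffend but were still flagged as high risk. A minimal sketch of that calculation, using entirely made-up toy records (not ProPublica's actual data), might look like:

```python
# Hypothetical toy records: (group, flagged_high_risk, actually_reoffended).
# The numbers are invented to illustrate the metric, not real findings.
records = [
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("black", False, True),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", True,  True),  ("white", False, True),
]

def false_positive_rate(rows):
    """Share of non-reoffenders incorrectly flagged as high risk."""
    negatives = [r for r in rows if not r[2]]   # did not reoffend
    flagged = [r for r in negatives if r[1]]    # but were flagged high risk
    return len(flagged) / len(negatives)

for group in ("black", "white"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
```

In this toy data the black group's false positive rate comes out at twice the white group's, which is the shape of the disparity the investigation reported; a real audit would compute the same ratio over the actual case records.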
The article describes a number of similar problems, and this kind of embedded bias can cause serious harm down the line.