There’s no way around it – AI is already a major part of our lives and will continue to be in the future. Sure, it’s not the kind of AI many have worried will bring a dystopian future by outsmarting its creators. At the moment you can find it in your smartphone improving camera performance, in the algorithms that recommend what to watch on Netflix, in marketing engines that sell you things you don’t need, and so on.


While the functions of AI will keep evolving, some, like Google CEO Sundar Pichai, believe we need to put frameworks in place to regulate these systems and avoid that pessimistic, dystopian view of AI. Others believe it will lead to a utopian human existence. There is a middle ground between these two extremes, though, which may be starting to rear its head.

What if we are creating AI that, through unconscious bias or wilful participation, inherits some of the biggest flaws of human nature? Could certain preconceptions and internal prejudices be creeping their way into our technology? As a data scientist myself, these questions have always been front of mind.

With Black Lives Matter once again shedding light on the injustices the community faces, the same scrutiny is being applied to AI. And for protestors, this could spell unwarranted trouble, as facial recognition systems powered by artificial intelligence are already up and running around the world.

In a study published by Joy Buolamwini, an MIT scientist and founder of the Algorithmic Justice League, large-scale biases were uncovered in systems built by industry titans. Systems from IBM, Google and Amazon were all tested in this research, which found that male faces were identified more accurately than female faces. More troubling, error rates for individuals with darker skin tones were significantly higher than for those with lighter skin tones, with darker-skinned women receiving by far the least accurate results.

This could become (and perhaps already has become) a massive problem, as law enforcement agencies around the world employ this technology for surveillance and security purposes.

Other studies have found comparable results. Idemia’s facial recognition system is used by law enforcement in the US, Australia and France. At sensitivity settings where Idemia’s algorithms falsely matched different white women’s faces at a rate of one in 10,000, they falsely matched black women’s faces about once in 1,000, ten times more frequently. A one-in-10,000 false match rate is a common benchmark for evaluating facial recognition systems.
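For readers curious what such an audit looks like in practice, here is a minimal Python sketch on synthetic similarity scores (the group names, score distributions and sample sizes are illustrative assumptions, not Idemia’s data). It fixes the operating threshold so that one group sits at roughly the one-in-10,000 benchmark, then measures the false match rate the same threshold produces for the other group.

```python
import numpy as np

# A minimal, synthetic sketch of the kind of audit behind these findings.
# The group names, score distributions and sample sizes are illustrative
# assumptions, not Idemia's data.
rng = np.random.default_rng(0)
impostor_scores = {  # similarity scores for pairs of different people
    "white women": rng.normal(0.30, 0.10, 200_000),
    "black women": rng.normal(0.38, 0.10, 200_000),
}

# Fix the operating threshold so the first group sits at roughly the
# one-in-10,000 false match rate used as a benchmark.
threshold = np.quantile(impostor_scores["white women"], 1 - 1e-4)

for group, scores in impostor_scores.items():
    fmr = (scores >= threshold).mean()  # share of impostor pairs wrongly matched
    print(f"{group}: false match rate of about 1 in {round(1 / fmr):,}")
```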

These AI models aren’t necessarily trained this way intentionally; instead, there is a (mostly unrecognised) bias in the data. Some facial recognition software, for example, was built on a skewed set of photos that was not proportional to the overall population. Data selection could also have been a problem, given the limited availability of a wide variety of photos.
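One simple, if partial, check is to compare the demographic make-up of the training set with the population the system will serve. The sketch below is a hypothetical example with made-up group labels, counts and population shares; a real audit would use the categories relevant to the deployment context.

```python
from collections import Counter

# Hypothetical example: group labels, counts and population shares are made up.
training_labels = (
    ["lighter-skinned man"] * 5400 + ["lighter-skinned woman"] * 2400
    + ["darker-skinned man"] * 1500 + ["darker-skinned woman"] * 700
)

population_share = {  # assumed share of each group in the deployment population
    "lighter-skinned man": 0.30,
    "lighter-skinned woman": 0.30,
    "darker-skinned man": 0.20,
    "darker-skinned woman": 0.20,
}

counts = Counter(training_labels)
total = sum(counts.values())
for group, expected in population_share.items():
    observed = counts[group] / total
    print(f"{group}: {observed:.1%} of training photos vs {expected:.1%} of population")
```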

This type of bias can also be seen in other kinds of data and AI models. Financial institutions train their algorithms on millions of historical records; the models then automatically find patterns and identify features that can decide the outcome of applications for credit and other financial products.

Let’s say a young person from a very poor neighbourhood has excelled. They worked hard to get a good education and a respectable job, they have a clean credit record, and now they want to buy their first property with a mortgage loan. Using only the traditional criteria a bank would look at to decide whether the loan should be granted, the algorithm could decide that the risk this potential customer poses is simply too high. Why? Because of the poor neighbourhood the person comes from. The algorithm has learned that previous applications from that neighbourhood were rejected because people living there might have had bad credit records or low disposable income. Even though this is not the case for our applicant, the fact that they come from the same neighbourhood may have cost them the loan.
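The dynamic is easy to reproduce with a toy model. In the sketch below (entirely synthetic data and invented feature names), historical approvals were really driven by disposable income, which the model never sees; because income was lower on average in the poor neighbourhood, the neighbourhood flag becomes a proxy, and an applicant with a clean record is scored noticeably lower simply for living there.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy reconstruction of the loan scenario above; all data is synthetic.
rng = np.random.default_rng(1)
n = 10_000

neighbourhood = rng.integers(0, 2, n)   # 1 = historically poor neighbourhood
clean_record = rng.integers(0, 2, n)    # 1 = clean credit record
# Disposable income drove past decisions but is NOT a feature the model sees;
# on average it is lower in the poor neighbourhood.
disposable_income = rng.normal(1.0 - 0.6 * neighbourhood, 0.5, n)
approved = (clean_record + disposable_income > 1.2).astype(int)  # historical label

X = np.column_stack([clean_record, neighbourhood])
model = LogisticRegression().fit(X, approved)

print("coefficient on neighbourhood:", round(model.coef_[0][1], 2))  # negative
for flag, name in [(0, "other area"), (1, "poor neighbourhood")]:
    p = model.predict_proba([[1, flag]])[0, 1]  # clean record in both cases
    print(f"clean record, {name}: predicted approval probability {p:.2f}")
```

Simply deleting the neighbourhood column is not a complete fix either, because other correlated features (postal codes, employer names and so on) can act as the same proxy, which is why outcomes need to be audited per group.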

Now, this is an extreme example, and most such systems have been able to iron out these issues. But if the people who build the algorithms carry such personal biases, those biases can still creep into the models.

To prevent bias from becoming inherent to our AI systems, the practices that we use to build the models need to be transparent. We should be cognisant of these possible biases in the systems that we build. We should strive to build an ethical and diverse AI industry.

According to Rachel Thomas, co-founder of fast.ai, “we can’t afford to have a tech that is run by an exclusive and homogenous group creating technology that impacts us all. We need more experts about people, like human psychology, behaviour and history. AI needs more unlikely people.”