What is AI Algorithm Bias?
AI algorithm bias happens when AI systems produce unfair or skewed answers that reflect societal inequalities. It can also happen when a question is phrased in a way that leads AI to give the answer it thinks the user wants to hear. That second case has less to do with how humans design AI and more to do with how a flawed system responds to our input, but it is still important to address.
AI bias can be introduced during data collection and labeling, when designing the algorithm, and when interpreting and applying predictions.
Why AI Algorithm Bias Exists
AI learns from existing data. It gets its information from real people. Therefore, it often amplifies inherent biases, leading to discriminatory outcomes. The people most likely to be affected by these biases are those in marginalized groups like women, people of colour, people with disabilities, and members of the LGBTQ+ community.
Biases exist. Some are conscious and some are unconscious, but we all have them, and some have them more than others. When we rely on people to build and program AI tools, those biases end up permeating the results. AI is not smart enough to discern what is biased and what is not; it is entirely reliant on humans.

Examples of AI Algorithm Biases
In Hockey…?
Canadians love hockey. As a Winnipeg Jets fan, I was heartbroken when they were eliminated after putting up such an incredible fight and delivering the best game of hockey in history.
For many Winnipeg Jets fans (and Canadians in general), if our team couldn’t win the Stanley Cup, we were going to root for the one Canadian team left in the playoffs: Edmonton.
Canadian pride exists on a good day. Throw hockey into the mix and it’s a different story. Many people were turning to ChatGPT to get its prediction on the outcome of the series. The problem was in how many people worded the question.
“Will Edmonton win the Stanley Cup?”
What’s the problem here? You’re simply asking a question. But because you are asking specifically, “Will EDMONTON win the Stanley Cup?” AI is going to assume you want Edmonton to win. So it is going to give you stats and information that are skewed in favour of Edmonton winning. A more neutral phrasing, such as “Which team is most likely to win the Stanley Cup, and why?”, avoids signalling the answer you are hoping for.
Edmonton did not win.
In AI Imagery
I was scrolling through LinkedIn recently, and someone had shared AI-generated images of what animals would look like in certain organizational roles. It's exactly the type of hard-hitting content we’re all on LinkedIn for, but that’s a topic for another blog.
The photos shared showed various jobs and roles: CEO, COO, finance manager, creative director, HR, etc. When I first came across this post, I wasn’t surprised when I clicked through and saw that the animals in all the higher-up positions either wore suits (clearly depicting them as male) or were distinctly male animals (a lion with a mane is clearly a male lion, and in this case it was the CEO). I thought it would depict female animals in lower-level positions (like HR; it’s important to note that the title wasn’t “HR director,” but simply “HR”). But to my surprise, female animals were not depicted at all. Not even in roles where I could see bias existing, like HR. They just didn’t exist in this imagery.
This is such an issue because women make up a significant portion of the workforce, and they hold top-level positions too, albeit in smaller numbers. These images erased women from the workforce entirely.
What Do We Do?
It all starts with AI governance, something that, in my opinion, is not being talked about enough. To ensure AI is used fairly and ethically, we can start by putting the following practices in place:
Compliance with laws and ethical standards
Trust through privacy and security
Fairness using techniques like counterfactual fairness (see the sketch after this list)
Transparency in data and decision-making
Human oversight to review AI decisions
Reinforcement learning techniques, such as learning from human feedback, to help reduce bias
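To make the fairness point above more concrete, here is a minimal, hypothetical sketch of a counterfactual-style “flip test”: train a simple hiring model, flip the sensitive attribute (gender, in this made-up data), and check whether any decisions change. The dataset, column names, and model are invented for illustration, and true counterfactual fairness would also require a causal model of how the attributes relate; this sketch only captures the basic intuition.

```python
# Hypothetical "flip test" for counterfactual-style bias checking.
# All data and column names below are made up for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Tiny example hiring dataset: one job-related feature plus a
# sensitive attribute (0 = female, 1 = male) and the past decision.
df = pd.DataFrame({
    "years_experience": [1, 5, 3, 8, 2, 7, 4, 6],
    "gender":           [0, 1, 0, 1, 1, 0, 1, 0],
    "hired":            [0, 1, 0, 1, 0, 1, 1, 1],
})

X = df[["years_experience", "gender"]]
y = df["hired"]

model = LogisticRegression().fit(X, y)

# Counterfactual copy: flip the sensitive attribute for every applicant
# while leaving everything else unchanged.
X_flipped = X.copy()
X_flipped["gender"] = 1 - X_flipped["gender"]

original = model.predict(X)
flipped = model.predict(X_flipped)

# Any decision that changes when only gender changes is evidence the
# model treats otherwise-identical applicants differently.
changed = (original != flipped).mean()
print(f"Share of decisions that flip with gender: {changed:.0%}")
```

If that share is above zero, the model is making different calls for applicants who differ only in gender, which is exactly the kind of problem a governance review is meant to catch before the system is used on real people.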
While this is a start, it is not enough. It’s important to understand how AI operates and to acknowledge that it’s not a perfect solution, that it will provide incorrect answers, and that biases exist. Make sure you are fact-checking information, especially if it is going to have a direct impact on other people or you’re using it for decision-making purposes.
Want to learn how to use AI ethically and correctly? We’re here to help.



