AI is increasingly used to automate decisions that humans would normally make within corporations, but because humans develop the software, the potential for issues is tremendous.
AI can only mimic human judgment, which means the consequences can be considerable and widespread if bias is introduced, knowingly or unknowingly. Consider the impact of biased decision-making in healthcare or lending, for example.
“Many worry whether the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide at scale,” writes the author. This 2020 article continues to be a great resource, especially since there’s a lot of misinformation and sensationalism surrounding this topic.
Read the article
As AI continues to penetrate every aspect of decision-making that affects our lives, developers and end users need to keep possible biases in mind so that they can be prevented or corrected.
This article contains a real-world example of how bias crept into machine learning and affected outcomes for more than 200 million patients. You’ll get details on how incorrect assumptions about the training data led to a model that was biased against minorities.
The author breaks it down simply, saying “algorithms still reflect the real world, which means they can unintentionally perpetuate existing inequality.” We like this article because it explores different viewpoints, not automatically concluding that AI is evil.
Read the article
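The mechanism behind stories like this is often a proxy label: the model is trained to predict something measurable (like past spending) that only imperfectly tracks the thing we actually care about (like medical need). Here is a deliberately tiny, hypothetical sketch of that idea; the names and numbers are invented for illustration and are not from the article’s study.

```python
# Hypothetical toy example (not the article's actual study): training on a
# proxy label, such as past healthcare spending instead of true medical
# need, can bake historical access gaps into a model's predictions.

def predicted_need(past_spending):
    """A naive 'model' that treats past spending as a proxy for need."""
    return past_spending

# Two patients with identical true medical need; patient_b's group
# historically had less access to care, so recorded spending is lower.
true_need = {"patient_a": 10, "patient_b": 10}
past_spending = {"patient_a": 8, "patient_b": 4}  # access gap, not a need gap

scores = {p: predicted_need(s) for p, s in past_spending.items()}
# The proxy-trained score ranks patient_b as less needy despite equal need.
```

The bias here isn’t in the arithmetic; it’s in the assumption that spending equals need. That’s exactly the kind of incorrect training-data assumption the article describes.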
Explainable AI is the ability to interpret a machine learning model’s decisions in a way that makes sense to a human. This is often easier said than done: machine learning models are designed to pick up on subtle cues and correlations that a human might miss.
This brief but comprehensive explanation of explainable AI gives you the what, the why, and examples that illustrate why it’s significant.
Get informed about explainable AI
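One simple flavor of explainability is feature ablation: perturb one input at a time and see how much the model’s output moves. The sketch below uses a toy linear model and a crude mean-ablation heuristic; it is a simplified stand-in for real explainability techniques such as SHAP, LIME, or scikit-learn’s permutation importance, and all the names and data are invented for illustration.

```python
# Toy interpretability sketch: measure each feature's influence by replacing
# it with its dataset mean and averaging the change in the model's output.

def model(income, age, zip_code):
    # A toy "model" whose true weights we pretend not to know.
    return 3.0 * income + 0.05 * age + 0.0 * zip_code

# Small synthetic dataset: (income, age, zip_code) rows.
rows = [(1.0, 30, 5), (2.0, 45, 7), (0.5, 60, 2), (3.0, 25, 9)]

def mean_ablation_importance(feature_index):
    """Average absolute output change when one feature is set to its mean."""
    mean_val = sum(r[feature_index] for r in rows) / len(rows)
    total = 0.0
    for r in rows:
        ablated = list(r)
        ablated[feature_index] = mean_val
        total += abs(model(*r) - model(*ablated))
    return total / len(rows)

importances = [mean_ablation_importance(i) for i in range(3)]
# income dominates, age matters a little, zip_code not at all.
```

Even this crude ranking gives a human something to interrogate, which is the whole point of explainable AI.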
It’s no secret that analytics and BI are far behind software engineering when it comes to testing. Great Expectations seeks to provide a more rigorous framework around data quality.
Great Expectations is an open-source Python library for validating, documenting, and profiling data. Use these resources to learn more.
What’s Great Expectations all about and why is it valuable?
See Great Expectations’ documentation
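The core idea is declarative "expectations": reusable, named assertions about your data that return structured pass/fail results instead of crashing a pipeline. The plain-Python sketch below mimics that idea, modeled loosely on the library’s real `expect_column_values_to_be_between` expectation; the helper here is our own simplified stand-in, not Great Expectations’ actual API, which is covered in the documentation linked above.

```python
# A minimal, plain-Python sketch of the "expectation" idea that Great
# Expectations formalizes: a declarative check that returns a result
# object rather than raising, so failures can be reported and tracked.

def expect_column_values_to_be_between(rows, column, min_value, max_value):
    """Simplified stand-in for a Great Expectations-style expectation."""
    failures = [r for r in rows if not (min_value <= r[column] <= max_value)]
    return {"success": not failures, "unexpected_count": len(failures)}

data = [
    {"age": 34, "income": 52000},
    {"age": 29, "income": 61000},
    {"age": 212, "income": 48000},  # bad record: implausible age
]

result = expect_column_values_to_be_between(data, "age", 0, 120)
# result["success"] is False: one row violated the expectation.
```

The real library adds the parts that make this rigorous at scale, such as expectation suites, data docs, and profiling, but the mental model is the same.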