The Data Planet - Vol. 04

June 29, 2021

Whether it’s recent news or just new to you, every two weeks the Data Planet serves up fascinating insights and resources from the data analytics and BI world.

Our snack-size summaries skip straight to the point.

This week’s edition of the Data Planet includes:

  • Ethical Concerns as AI Takes Bigger Decision-Making Role in More Industries
  • Racial Bias Found in a Major Health Care Risk Algorithm
  • What is Explainable AI?
  • Software Spotlight: Great Expectations

Ethical Concerns as AI Takes Bigger Decision-Making Role in More Industries

AI is increasingly used to automate decisions that humans would normally make within corporations, but because humans develop the software, the potential for issues is tremendous.

AI can only mimic human judgment, which means the consequences could be considerable and widespread if bias is knowingly or unknowingly introduced. Think about the impact of decision-making in healthcare or lending, for example.

“Many worry whether the coming age of AI will bring new, faster, and frictionless ways to discriminate and divide at scale,” writes the author. This 2020 article continues to be a great resource, especially since there’s a lot of misinformation and sensationalism surrounding this topic.


Read the article

Racial Bias Found in a Major Health Care Risk Algorithm

As AI continues to penetrate all aspects of decision-making that affect our lives, developers and end users always need to be thinking about possible biases so they can be prevented or fixed.

This article contains a real-world example of how bias crept into machine learning and impacted outcomes for more than 200 million patients. You’ll get details on how incorrect assumptions about the training data led to a machine learning model that was biased against minorities.

The author breaks it down simply, saying “algorithms still reflect the real world, which means they can unintentionally perpetuate existing inequality.” We like this article because it explores different viewpoints, not automatically concluding that AI is evil.


Read the article

What is Explainable AI?

Credit: XKCD

Explainable AI is the ability to interpret a machine learning model’s decisions in a way that makes sense to a human. This is often easier said than done. Machine learning models by design are meant to pick up on subtle clues and correlations that a human might miss. 

This brief but comprehensive explanation of explainable AI gives you the what, the why, and examples that illustrate why it’s significant.


Get informed about explainable AI

Software Spotlight: Great Expectations

It’s no secret that analytics and BI are far behind software engineering when it comes to testing. Great Expectations seeks to provide a more rigorous framework around data quality.

Great Expectations is an open-source Python library for validating, documenting, and profiling data. Use these resources to learn more.
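To give a flavor of the kind of checks Great Expectations automates, here is a minimal plain-pandas sketch of two “expectations” (a column has no nulls; values fall in a range). The column names and data are made up for illustration; the library’s own API wraps checks like these (e.g., its not-null and between-values expectations) with richer reporting, profiling, and auto-generated documentation.

```python
import pandas as pd

# Toy data -- column names and values are illustrative only.
df = pd.DataFrame({
    "patient_id": [1, 2, 3, None],
    "risk_score": [0.12, 0.87, 0.45, 1.3],
})

def expect_not_null(series):
    """Sketch of a 'column values should not be null' expectation."""
    failures = int(series.isna().sum())
    return {"success": failures == 0, "unexpected_count": failures}

def expect_between(series, lo, hi):
    """Sketch of a 'column values should be between lo and hi' expectation."""
    vals = series.dropna()
    bad = vals[(vals < lo) | (vals > hi)]
    return {"success": bad.empty, "unexpected_count": int(len(bad))}

# One null patient_id and one out-of-range risk_score, so both checks fail.
print(expect_not_null(df["patient_id"]))
print(expect_between(df["risk_score"], 0, 1))
```

Each check returns a small result object with a success flag and a count of offending values, mirroring the success/detail shape that validation results typically take.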


What’s Great Expectations all about and why is it valuable?

See Great Expectations’ documentation

Knowledge is power, and you want that power at your fingertips.
Stay tuned for the next edition of the Data Planet.

Stay in Touch With Onebridge

Hey! We hope you've noticed that none of our content is "gated," forcing you to provide information before downloading. We work hard to provide valuable information to serve our audience and our clients, and we are proud of it.

If you would like to be notified of new content, events, and resources from Onebridge, sign up for our newsletter here. After signing up, we provide a profile link where you can tell us what topics you want to hear about. With Onebridge, you control your data.

You can also follow us on social media to see upcoming events as well as other resources like blogs, eBooks, and more!