About the Presentation
Testing for Cognitive Bias in AI Systems
Shouldn’t AI systems always produce the right answer within their problem domain? In reality, their performance is a direct result of the data used to train them.
Data collected by humans can have built-in biases. This can make it particularly hard to identify any inequality, bias or discrimination feeding into a particular decision.
You’ll learn how AI systems can suffer from the same biases as humans, how those biases can lead to skewed results, and how to develop test cases that recognize and address bias both in the training data and in the resulting system.
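As a flavor of what such a data-level test case might look like (this sketch is not from the presentation itself), consider a check that the positive-label rate in a training set does not diverge too strongly across a protected attribute. The column names ("gender", "approved"), the file name, and the 0.10 threshold are illustrative assumptions.

```python
# Minimal sketch of a data-level bias test: flag training data for review
# when the positive-label rate differs across groups by more than a threshold.
import pandas as pd


def positive_rate_gap(df: pd.DataFrame, group_col: str, label_col: str) -> float:
    """Largest difference in positive-label rate between any two groups."""
    rates = df.groupby(group_col)[label_col].mean()
    return float(rates.max() - rates.min())


def test_training_data_label_balance():
    # Hypothetical dataset and column names, chosen only for illustration.
    df = pd.read_csv("loan_training_data.csv")
    gap = positive_rate_gap(df, group_col="gender", label_col="approved")
    # 0.10 is an assumed tolerance; a real project would set it deliberately.
    assert gap <= 0.10, f"Label-rate gap across groups is {gap:.2f}"
```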