Reducing bias in models

In the current world, there are known, well-documented general biases based on gender, race, and sexual orientation. This means that the data we collect can be expected to exhibit those biases unless a deliberate effort was made to remove them before the data was gathered.

All bias in algorithms is, directly or indirectly, due to human bias. Human bias can be reflected either in the data used by the algorithm or in the formulation of the algorithm itself. For a typical machine learning project following the CRISP-DM (Cross-Industry Standard Process for Data Mining) lifecycle, which was explained in Chapter 5, Graph Algorithms, bias can be introduced at any phase of the lifecycle.

The trickiest part of reducing bias is to first identify and locate unconscious bias. 
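One simple way to surface hidden bias in collected data is to compare outcome rates across demographic groups before training. The sketch below computes per-group selection rates and the disparate-impact ratio (the "four-fifths rule" heuristic) on a small synthetic dataset; the field names `group` and `hired` are hypothetical, chosen only for illustration.

```python
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Compute the positive-outcome rate for each group in the data."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        if record[outcome_key]:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate.

    Values below ~0.8 are commonly treated as a warning sign
    (the 'four-fifths rule' heuristic).
    """
    return min(rates.values()) / max(rates.values())

# Synthetic illustrative data -- not from any real dataset.
data = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "A", "hired": True},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

rates = selection_rates(data, "group", "hired")
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # 0.25 / 0.75 ~= 0.33, below the 0.8 threshold
```

A check like this does not prove an algorithm is biased, but a large gap in base rates tells you the training data itself carries a disparity that the model will likely learn and reproduce.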
