Ensuring Fairness in Machine Learning to Advance Health Equity
The health domain has witnessed a rapid increase in the use of machine learning (ML) technology, which follows the old adage "garbage in, garbage out". Human and structural biases at various stages (data collection, model design, and interpretation of predictions) have all contributed to widening health disparities among protected groups, such as African-Americans.
The driving force of the article is attaining health equity by purposefully designing the health and social systems into which ML technologies will be integrated. Bringing fairness into the design, deployment, and evaluation of models, guided by the principles of distributive justice, can therefore advance health equity.
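To illustrate what evaluating a model for fairness might look like in practice, here is a minimal sketch (not from the article) that computes the equal-opportunity gap, i.e., the difference in true positive rates between two patient groups. The data, predictions, and group labels are all hypothetical.

```python
import numpy as np

def true_positive_rate(y_true, y_pred, mask):
    """TPR (sensitivity) for the subgroup selected by `mask`."""
    positives = mask & (y_true == 1)
    if positives.sum() == 0:
        return float("nan")
    return (y_pred[positives] == 1).mean()

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in TPR between two patient groups (0 and 1)."""
    tpr_a = true_positive_rate(y_true, y_pred, group == 0)
    tpr_b = true_positive_rate(y_true, y_pred, group == 1)
    return tpr_a - tpr_b

# Toy example with made-up labels, predictions, and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # e.g., a protected attribute

print(f"Equal-opportunity gap: {equal_opportunity_gap(y_true, y_pred, group):+.2f}")
```

A gap near zero means the model identifies true cases at similar rates in both groups. Equal opportunity is only one of several group fairness criteria, and the appropriate choice depends on the clinical context.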
Unlike rule-based systems, which are modular and therefore easy to edit, ML systems learn their own rules from data, i.e., a set of input features and labelled outputs. ML systems are prone to biases introduced at the time of data collection (minority, missing-data, and informativeness bias) or model design (label and cohort bias).
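To make the data-collection biases concrete, here is a minimal sketch of two quick dataset audits: one for minority under-representation and one for group-dependent missingness. The DataFrame columns and values are hypothetical.

```python
import numpy as np
import pandas as pd

np.random.seed(0)

# Hypothetical patient records; the column names and values are made up.
df = pd.DataFrame({
    "group": ["A"] * 90 + ["B"] * 10,  # group B is a small minority
    "lab_value": np.r_[np.random.randn(90),  # mostly complete for group A
                       [np.nan] * 6,         # often missing for group B
                       np.random.randn(4)],
})

# Minority bias: is any group too small to learn reliable patterns from?
print(df["group"].value_counts(normalize=True))

# Missing-data bias: are features missing more often for one group?
print(df.groupby("group")["lab_value"].apply(lambda s: s.isna().mean()))
```

If a feature is missing far more often for one group, a model trained on it will be less informative for that group, which is exactly how a data-collection bias propagates into predictions.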
A deployed model's predictions can interact with clinicians to cause automation or dismissal bias, and with patients to cause privilege, informed-mistrust, or agency bias, any of which could exacerbate health care disparities.