The pursuit of fairness in machine-learned decisions is evolving, particularly in how fairness criteria such as demographic parity, equalized odds, and predictive value difference are balanced against performance metrics like accuracy. Traditional methods often commit to fixed trade-offs among these objectives, potentially overlooking better ways to integrate fairness. This thesis re-conceptualizes fair decision-making through the lens of imitation learning, pairing neural networks with subdominance minimization. Unlike previous approaches based on logistic regression, our neural network model surpasses the human decisions it imitates across multiple fairness and performance metrics. Comprehensive evaluations on standard benchmark datasets demonstrate significant improvements in decision fairness without compromising predictive accuracy.
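To make the core objective concrete, the following is a minimal illustrative sketch of a hinge-style subdominance loss, which penalizes a learner whenever it fails to outperform its demonstrator on each criterion by a margin. The function names, the specific margin form, and the toy metric values are assumptions for illustration, not the implementation used in this thesis.

```python
import numpy as np

def subdominance(model_metrics, demo_metrics, alphas):
    """Hinge-style subdominance of a model relative to a demonstrator.

    All metrics are treated as losses (lower is better). For each
    criterion k, the model incurs penalty max(alpha_k * (m_k - d_k) + 1, 0):
    zero only when it beats the demonstrator by at least 1/alpha_k.
    """
    m = np.asarray(model_metrics, dtype=float)
    d = np.asarray(demo_metrics, dtype=float)
    a = np.asarray(alphas, dtype=float)
    return float(np.maximum(a * (m - d) + 1.0, 0.0).sum())

# Toy example (hypothetical values): model vs. human demonstrator on
# two criteria, e.g. (error rate, demographic parity gap).
model_loss = [0.10, 0.02]
human_loss = [0.15, 0.08]
print(subdominance(model_loss, human_loss, alphas=[10.0, 10.0]))
```

Minimizing this quantity over demonstrations pushes the learned decision rule to dominate the human decisions on every criterion simultaneously, rather than optimizing one fixed weighted trade-off.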