AI for Social Good: Developing Fair, Explainable Neural Networks Resilient to Confounding Factors and Requiring Minimal Supervision

Science / Computer Science

Neural networks, computational models inspired by the workings of the human brain, largely drive the recent Artificial Intelligence revolution. These networks, composed of hierarchically organized artificial neurons, learn from human-annotated examples. In our research group, we are working to address some of the primary limitations of these methods. Our first goal is to understand how to build neural networks that make fair decisions even when data gathered from the real world reflects societal injustices, such as racial or gender disparities. Second, we are exploring ways to explain the decisions made by neural networks, which are often regarded as inscrutable “black boxes.” Finally, we aim to determine how networks can learn when human-annotated data is scarce, a situation that applies to the vast majority of data available today.
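To make the fairness goal concrete, one widely used group-fairness criterion is demographic parity: a model's positive-decision rate should be similar across demographic groups. The sketch below is purely illustrative and not part of the project's methodology; the function name and the two-group setup are assumptions for the example.

```python
# Illustrative sketch: demographic parity gap between two groups.
# A gap near 0 means the model grants positive decisions at similar
# rates to both groups; a large gap signals a potential disparity.
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups 'A' and 'B'.

    decisions: list of 0/1 model outputs
    groups: list of group labels ('A' or 'B'), aligned with decisions
    """
    rate = {}
    for g in ("A", "B"):
        outcomes = [d for d, gr in zip(decisions, groups) if gr == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["A"] - rate["B"])

# Example: group A gets positive decisions 75% of the time, group B only 25%.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

In practice, fairness research involves many competing criteria (equalized odds, calibration, and others); this single metric is shown only to ground the idea of auditing a model's decisions against group membership.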

Amount invested

R$ 100,000.00