Can neural networks be designed to be secure and robust to ensure safe usage?

Science / Computer Science

Adversarial attacks on neural networks exploit vulnerabilities by introducing small, often imperceptible perturbations to input data that lead to incorrect outputs. Such attacks can compromise systems that rely on neural networks, resulting in financial losses and security risks. Recent research suggests that defining neural networks as ordinary differential equations (ODEs) and enforcing the stability conditions of dynamical systems can improve robustness against adversarial attacks. These studies, however, are still preliminary and have limitations. This project aims to integrate neural networks with dynamical systems principles to design a new generation of inherently secure neural networks that resist adversarial perturbations.
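To make the mechanism concrete, below is a minimal, hypothetical NumPy sketch of one stability construction explored in this line of research (an antisymmetric weight matrix with light damping, in the spirit of stable-architecture proposals such as Haber & Ruthotto, 2017). The dimensions, step size, and damping coefficient are illustrative assumptions, not the project's actual design.

```python
import numpy as np

# Sketch of the idea in the paragraph above: a residual network read as the
# forward-Euler discretization of the ODE  dx/dt = tanh(W x + b).
# One stability construction from the literature makes W antisymmetric plus a
# small damping term, so the linearized dynamics have eigenvalues with
# non-positive real parts and small input perturbations cannot grow
# unboundedly as they propagate through depth.

rng = np.random.default_rng(0)
dim, steps, dt, gamma = 8, 50, 0.1, 0.1  # illustrative assumptions

A = rng.standard_normal((dim, dim))
W = A - A.T - gamma * np.eye(dim)  # antisymmetric part + damping: stable dynamics
b = rng.standard_normal(dim)

def forward(x):
    """Integrate the neural ODE with forward Euler over a fixed horizon."""
    for _ in range(steps):
        x = x + dt * np.tanh(W @ x + b)
    return x

x = rng.standard_normal(dim)
delta = 1e-3 * rng.standard_normal(dim)  # small adversarial-style perturbation

gap_in = np.linalg.norm(delta)
gap_out = np.linalg.norm(forward(x + delta) - forward(x))
print(f"input gap:  {gap_in:.2e}")
print(f"output gap: {gap_out:.2e}")  # stays on the order of the input gap
```

Running this sketch shows the output gap remaining on the same order as the input gap; with an unconstrained weight matrix, the same perturbation can be amplified exponentially with depth, which is the vulnerability stable ODE formulations aim to remove.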

Amount invested

Serrapilheira Grant: R$ 600.000,00 (R$ 450.000,00, plus R$ 150.000,00 in optional bonuses for the integration and training of individuals from underrepresented groups in science)

Open Calls

Science Call 7
Topics
  • Dynamic systems
  • Neural networks