[Taken] Individual Fairness through Time

Individual fairness concerns a machine learning model's ability to produce predictions that are not affected by one or more sensitive features, such as gender, race, or age. Recent work has developed techniques for the formal analysis and approximation of fairness in the case of deep Neural Networks (NNs). However, such techniques are restricted to simple …
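To make the notion concrete, the sketch below empirically checks how much a toy model's prediction moves when only a sensitive feature is flipped. Everything here is illustrative and not part of the project: the model, the choice of feature index, and the tolerance ε are assumptions.

```python
# A minimal sketch (not the project's method) of an empirical individual-
# fairness check: flip a sensitive feature and measure how far the
# model's prediction moves. Model and data are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" model: logistic regression with fixed random weights.
W = rng.normal(size=5)

def predict(x):
    return 1.0 / (1.0 + np.exp(-x @ W))  # probability of positive class

SENSITIVE_IDX = 0  # assumption: feature 0 encodes a binary sensitive attribute
EPSILON = 0.05     # assumption: tolerated prediction shift

X = rng.normal(size=(100, 5))
X[:, SENSITIVE_IDX] = rng.integers(0, 2, size=100)

X_flipped = X.copy()
X_flipped[:, SENSITIVE_IDX] = 1 - X_flipped[:, SENSITIVE_IDX]

gaps = np.abs(predict(X) - predict(X_flipped))
print(f"max prediction gap: {gaps.max():.3f}")
print(f"fraction of eps-fair points: {(gaps <= EPSILON).mean():.2%}")
```

A formal analysis would certify such bounds over whole input regions rather than sampled points, which is where the NN-specific techniques mentioned above come in.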

[Taken] Deep learning for physiological signals

Deep learning has recently achieved state-of-the-art performance in many computer vision classification tasks. In this project we’ll look into developing a tailored deep learning solution based on Neural Networks for a classification problem involving physiological signals. The project will go into the details of architecture selection, hyper-parameter learning, validation, and best training practices of …
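A tailored solution for this kind of task often starts from a small 1D convolutional network. The sketch below is only a plausible starting point, assuming fixed-length single-channel signals (e.g., one-second ECG windows at 250 Hz); the shapes, layer sizes, and class count are illustrative, not taken from the project.

```python
# A minimal sketch, assuming fixed-length single-channel signals and a
# small label set; all shapes and hyper-parameters are illustrative.
import torch
import torch.nn as nn

class SignalCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pooling independent of signal length
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):              # x: (batch, 1, time)
        return self.classifier(self.features(x).squeeze(-1))

model = SignalCNN()
dummy = torch.randn(8, 1, 250)         # batch of 8 one-second windows
print(model(dummy).shape)              # torch.Size([8, 4])
```

Architecture selection and hyper-parameter learning would then vary pieces such as kernel sizes, depth, and pooling, validated against held-out recordings.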

[Taken] Fair training through uncertainty

Deep learning, in particular Neural Networks (NNs), has achieved state-of-the-art results in many applications over the last decade. However, the way these models work and operate is still not fully understood, and in practice they are often treated as black boxes when deployed. Unfortunately, this raises several concerns about their suitability to deal with sensitive …
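Given the project's focus on uncertainty, one generic way to open up this black box is to expose a model's predictive uncertainty. The Monte Carlo dropout sketch below illustrates that idea only; it is not the project's specific method, and the network and input are hypothetical.

```python
# A generic illustration of predictive uncertainty via Monte Carlo
# dropout; not the project's specific method. Model and data are toy.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2),
)

def mc_predict(model, x, n_samples: int = 50):
    model.train()                     # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [model(x).softmax(dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(0), probs.std(0)  # predictive mean and spread

x = torch.randn(1, 10)
mean, spread = mc_predict(net, x)
print(mean, spread)    # high spread flags inputs the model is unsure about
```

Flagging high-uncertainty inputs gives a handle for deciding when a model's prediction on a sensitive case should be deferred rather than trusted.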

[Taken] Robustness of neural networks

The sudden rise of adversarial examples (i.e., input points intentionally crafted so as to trick a model into misprediction) has shown that even state-of-the-art deep learning models can be extremely vulnerable to intelligent attacks. Unfortunately, the fragility of such models makes their deployment in safety-critical real-world applications (e.g., self-driving cars or eHealth) difficult to justify, hence …
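As an illustration of how such attacks are crafted, the sketch below implements the standard Fast Gradient Sign Method (FGSM); the model, input, label, and perturbation budget ε are placeholders, not part of the project.

```python
# A minimal sketch of the Fast Gradient Sign Method (FGSM), one standard
# way of crafting adversarial examples; model and input are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)
y = torch.tensor([2])                       # assumed true label

loss = loss_fn(model(x), y)
loss.backward()

eps = 0.1                                   # perturbation budget (assumed)
x_adv = (x + eps * x.grad.sign()).detach()  # step that maximises the loss

print(model(x).argmax(1), model(x_adv).argmax(1))  # prediction may flip
```

Robustness analysis then asks the converse question: certifying that no perturbation within the budget ε can change the model's prediction.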