How Best To Optimise Machine Learning Hyperparameters?

When designing and training a neural network model, the hyperparameters include the SGD step size, mini-batch size, gradient decay policy, choice of regularisation, etc. Selecting values for these hyperparameters is a key step in obtaining a useful model. While selection is commonly based on heuristics and trial and error, there is also much interest in …
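One of the simplest systematic alternatives to hand tuning is random search over the hyperparameter space. The sketch below is purely illustrative (the toy regression problem, the search budget, and the sampling ranges are assumptions, not part of the project above): it randomly samples an SGD step size and mini-batch size, trains a one-parameter model with each configuration, and keeps the best.

```python
import numpy as np

# Toy data: y = 3x + noise. We train a one-parameter linear model by SGD
# and use random search to pick the step size and mini-batch size (two of
# the hyperparameters mentioned above). All names here are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=200)
y = 3.0 * X + 0.1 * rng.normal(size=200)

def sgd_final_mse(lr, batch, epochs=20):
    """Run plain SGD on w*x ~ y and return the final mean squared error."""
    w = 0.0
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            b = idx[start:start + batch]
            grad = 2.0 * np.mean((w * X[b] - y[b]) * X[b])
            w -= lr * grad
    return float(np.mean((w * X - y) ** 2))

# Random search: sample hyperparameter configurations, keep the best one.
best = None
for _ in range(30):
    lr = 10 ** rng.uniform(-4, 0)            # log-uniform step size
    batch = int(rng.choice([8, 16, 32, 64]))  # mini-batch size
    mse = sgd_final_mse(lr, batch)
    if best is None or mse < best[0]:
        best = (mse, lr, batch)

print(best)  # (best MSE, chosen step size, chosen batch size)
```

Sampling the step size log-uniformly reflects the common observation that its useful values span several orders of magnitude; more sophisticated approaches (e.g. Bayesian optimisation) replace the uniform sampling with a model of the objective.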

[Taken] Robustness of neural networks

The sudden rise of adversarial examples (i.e., input points intentionally crafted to trick a model into misprediction) has shown that even state-of-the-art deep learning models can be extremely vulnerable to intelligent attacks. Unfortunately, the fragility of such models makes their deployment in safety-critical real-world applications (e.g., self-driving cars or eHealth) difficult to justify, hence …
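A classic way to craft such an input is the fast gradient sign method (FGSM): perturb the input by a small step in the sign of the loss gradient with respect to the input. The sketch below is a minimal illustration on a hand-set logistic-regression model (the weights, input, and perturbation budget are assumptions for demonstration, not taken from the project above).

```python
import numpy as np

# Hand-set logistic-regression "model"; weights and input are illustrative.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """Probability of class 1 under logistic regression."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([1.0, 0.5])   # clean input, classified as class 1
p_clean = predict(x)

# Gradient of the cross-entropy loss w.r.t. the INPUT is (p - y) * w;
# FGSM moves the input by eps in the sign of that gradient, which
# increases the loss as fast as possible under an L-infinity budget.
y = 1.0
eps = 0.9
x_adv = x + eps * np.sign((p_clean - y) * w)
p_adv = predict(x_adv)

print(p_clean, p_adv)  # the adversarial input flips the prediction
```

Each coordinate moves by at most eps, so the perturbation is bounded in the L-infinity norm; yet the prediction flips from class 1 to class 0, which is exactly the fragility the project description refers to.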