The input-output mappings learned by state-of-the-art neural networks are highly discontinuous. It is possible to cause a neural network used for image recognition to misclassify its input by applying very specific, hardly perceptible perturbations to the input, called adversarial perturbations. Many hypotheses have been proposed to explain the existence of these peculiar samples, and several methods have been suggested to mitigate them; a proven explanation, however, remains elusive. In this work, we take steps towards a formal characterization of adversarial perturbations by deriving lower bounds on the magnitudes of the perturbations necessary to change the classification of neural networks. The bounds are experimentally verified on the MNIST and CIFAR-10 data sets.

NIPS,
2017.
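As a concrete illustration of the kind of quantity the abstract refers to: for a purely linear classifier, the smallest L2 perturbation that changes the predicted class has a closed form, so the "lower bound" is exact in that special case. The sketch below is illustrative only and is not the paper's method; the function name `min_l2_perturbation` is mine.

```python
import numpy as np

def min_l2_perturbation(W, b, x):
    """Smallest L2 perturbation changing the predicted class of the linear
    classifier f(x) = argmax(W @ x + b). For each competing class j, project
    x onto the decision boundary {x : (W[j]-W[c]) @ x + (b[j]-b[c]) = 0}
    and keep the shortest such step."""
    scores = W @ x + b
    c = int(np.argmax(scores))            # current prediction
    best = None
    for j in range(len(b)):
        if j == c:
            continue
        w = W[j] - W[c]
        f = scores[j] - scores[c]         # negative while class c wins
        r = -f * w / (np.linalg.norm(w) ** 2)  # step onto the boundary
        if best is None or np.linalg.norm(r) < np.linalg.norm(best):
            best = r
    return best

W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([2.0, 1.0])                  # predicted class 0
r = min_l2_perturbation(W, b, x)
x_adv = x + 1.001 * r                     # slight overshoot to flip the label
```

For this toy example the minimal perturbation has norm $\sqrt{0.5} \approx 0.707$, and `x_adv` is classified differently from `x`.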

Jonathan Peck, Bart Goossens, Yvan Saeys.
Detecting adversarial manipulation using inductive Venn-ABERS predictors.
Neurocomputing,
2020.

Charles Grumer, Jonathan Peck, Femi Olumofin, Anderson Nascimento, Martine De Cock.
Hardening DGA Classifiers Utilizing IVAP.
IEEE Big Data,
2019.

Jonathan Peck, Bart Goossens, Yvan Saeys.
Calibrated Multi-Probabilistic Prediction as a Defense against Adversarial Attacks.
Benelearn,
2019.

Jonathan Peck, Claire Nie, Raaghavi Sivaguru, Charles Grumer, Femi Olumofin, Bin Yu, Anderson Nascimento, Martine De Cock.
CharBot: A Simple and Effective Method for Evading DGA Classifiers.
IEEE Access,
2019.

Jonathan Peck, Bart Goossens, Yvan Saeys.
Detecting adversarial examples with inductive Venn-ABERS predictors.
ESANN,
2019.

Jonathan Peck, Joris Roels, Bart Goossens, Yvan Saeys.
Lower Bounds on the Robustness to Adversarial Perturbations.
NIPS,
2017.

Jonathan Peck, Joris Roels, Bart Goossens, Yvan Saeys.
Robustness of Classifiers to Adversarial Perturbations.
Ghent University, Faculty of Sciences,
2017.

There was an interesting post on Reddit recently where someone asked if it is possible to fit a neural network for accurately predicting the digits of $\pi$. Thinking about this question brings to light some important limitations of the current deep learning paradigm, which I hope to clarify in this post.
General objective

In order to learn to predict the digits of $\pi$, ideally our objective would be to construct a function $f: \mathbb{N} \to \{ 0, \dots, 9 \}$ such that $f(n)$ returns the $n$th digit in the decimal expansion of $\pi$.
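Note that such an $f$ is perfectly computable without any learning: Machin's formula $\pi = 16\arctan(1/5) - 4\arctan(1/239)$ with arbitrary-precision decimals gives every digit exactly. A minimal sketch (the function name `pi_digit` is mine, and digits are 0-indexed so the leading 3 is digit 0):

```python
from decimal import Decimal, getcontext

def pi_digit(n: int) -> int:
    """n-th decimal digit of pi, 0-indexed so pi_digit(0) == 3."""
    prec = n + 10                       # guard digits against rounding error
    getcontext().prec = prec            # sets precision globally; fine for a sketch
    eps = Decimal(10) ** -prec

    def arctan_inv(x: int) -> Decimal:
        # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x**(2k+1))
        total, k = Decimal(0), 0
        while True:
            term = Decimal(1) / ((2 * k + 1) * Decimal(x) ** (2 * k + 1))
            if term < eps:
                break
            total += -term if k % 2 else term
            k += 1
        return total

    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return int(str(pi).replace(".", "")[n])
```

The contrast with a neural network is the point: this program is exact for every $n$, while a learned $f$ can at best interpolate the digits it has seen.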

The phenomenon of adversarial examples has attracted much research effort as of late, with new defenses being proposed regularly. However, there are some troubling trends in the evaluation of these defenses which cast doubt on their effectiveness.

Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names which can be used for command and control (C&C) purposes. Detecting DGA domains and distinguishing them from benign traffic in real time is an important method for combating the spread of malware.
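To make the mechanism concrete, here is a toy DGA: infected machines and their operator share a seed, and both derive the same pseudo-random domain list from it each day, so the C&C rendezvous point keeps moving. This is purely illustrative and not any real malware family's algorithm.

```python
import hashlib

def toy_dga(seed: str, date: str, count: int = 5) -> list[str]:
    """Derive a deterministic list of pseudo-random domain names from a
    shared seed and the current date, as DGA malware does for C&C."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{date}:{i}".encode()).hexdigest()
        # map the first 12 hex digits to lowercase letters to form a label
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + ".com")
    return domains
```

Because the generation is deterministic given the seed and date, defenders who recover the seed can precompute and sinkhole the domains; classifiers instead try to flag the characteristically random-looking labels.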

Modern machine learning models appear to be extremely sensitive to tiny perturbations to their inputs.

I am a teaching assistant for the following courses at Ghent University:

- Artificial Intelligence
- Computergebruik (Computer Use)