# Selected Publications

### Lower Bounds on the Robustness to Adversarial Perturbations

The input-output mappings learned by state-of-the-art neural networks are significantly discontinuous. It is possible to cause a neural network used for image recognition to misclassify its input by applying very specific, hardly perceptible perturbations to the input, called adversarial perturbations. Many hypotheses have been proposed to explain the existence of these peculiar samples, and several methods have been suggested to mitigate them. A proven explanation remains elusive, however. In this work, we take steps towards a formal characterization of adversarial perturbations by deriving lower bounds on the magnitudes of perturbations necessary to change the classification of neural networks. The bounds are experimentally verified on the MNIST and CIFAR-10 data sets.
NIPS, 2017
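
The flavor of such bounds can be illustrated in the simplest possible setting: for a linear classifier $\mathrm{sign}(w \cdot x + b)$, the smallest $L_2$ perturbation that changes the classification of $x$ is exactly $|w \cdot x + b| / \|w\|_2$, the distance to the decision boundary. A minimal sketch (the function name is illustrative; the bounds in the paper concern deep networks and are more involved):

```python
import math

def linear_robustness(w, b, x):
    """Smallest L2 perturbation that flips sign(w.x + b) for input x.

    For a linear classifier this distance to the decision boundary is
    exact; it is the simplest instance of a robustness lower bound.
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return abs(score) / math.sqrt(sum(wi * wi for wi in w))
```

For example, with `w = (3, 4)`, `b = 0` and `x = (1, 0)`, any perturbation of norm less than 0.6 provably leaves the classification unchanged.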

# Recent Publications

• Detecting adversarial manipulation using inductive Venn-ABERS predictors. Neurocomputing, 2020.
• Hardening DGA Classifiers Utilizing IVAP. IEEE Big Data, 2019.
• Calibrated Multi-Probabilistic Prediction as a Defense against Adversarial Attacks. Benelearn, 2019.
• CharBot: A Simple and Effective Method for Evading DGA Classifiers. IEEE Access, 2019.
• Detecting adversarial examples with inductive Venn-ABERS predictors. ESANN, 2019.
• Lower Bounds on the Robustness to Adversarial Perturbations. NIPS, 2017.
• Robustness of Classifiers to Adversarial Perturbations. Ghent University, Faculty of Sciences, 2017.

# Recent & Upcoming Talks

• Benelearn Presentation, Nov 8, 2019, 9:10 AM
• ESANN Presentation, Apr 24, 2019, 2:40 PM
• NIPS Poster Presentation, Dec 4, 2017, 6:30 PM

# Recent Posts

### On the Limits of Statistical Inference

There was an interesting post on Reddit recently where someone asked whether it is possible to train a neural network to accurately predict the digits of $\pi$. Thinking about this question brings to light some important limitations of the current deep learning paradigm, which I hope to clarify in this post.

**General objective.** In order to learn to predict the digits of $\pi$, ideally our objective would be to construct a function $f: \mathbb{N} \to \{ 0, \dots, 9 \}$ such that $f(n)$ returns the $n$th digit in the decimal expansion of $\pi$.
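
Part of what makes the learning question interesting is that the target function $f$ itself is easy to write down exactly. A sketch of such an $f$ (the function name and the use of Machin's formula are my own illustration, not from the post):

```python
from decimal import Decimal, getcontext

def pi_digit(n, prec=60):
    """Return the n-th decimal digit of pi: pi_digit(1) == 1, pi_digit(2) == 4, ...

    Computes pi to `prec` digits via Machin's formula:
    pi = 16*arctan(1/5) - 4*arctan(1/239).
    """
    getcontext().prec = prec + 10
    eps = Decimal(10) ** -(prec + 5)

    def arctan_inv(x):
        # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x**(2k+1))
        x = Decimal(x)
        power = Decimal(1) / x  # x**-(2k+1)
        total = power           # k = 0 term
        k = 0
        while True:
            power /= x * x
            k += 1
            term = power / (2 * k + 1)
            if term < eps:
                break
            total += -term if k % 2 else term
        return total

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return int(str(pi)[n + 1])  # skip the leading "3."
```

The contrast between this exact, terminating program and a statistical model trained on finitely many digit examples is precisely what the post is about.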

### Troubling Trends in Adversarial Machine Learning

The phenomenon of adversarial examples has attracted much research effort as of late, with new defenses being proposed regularly. However, there are some troubling trends in the evaluation of these defenses which cast doubt on their effectiveness.

# Projects

#### Domain Generation Algorithms

Domain generation algorithms (DGAs) are commonly leveraged by malware to create lists of domain names which can be used for command and control (C&C) purposes. Detecting DGA domains and distinguishing them from benign traffic in real time is an important method for combating the spread of malware.
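
As a rough illustration of the idea (this toy generator is my own sketch, not any actual malware family's algorithm), a DGA typically hashes a shared seed together with the current date into a deterministic list of candidate domains:

```python
import hashlib
from datetime import date

def toy_dga(seed, day, count=5):
    """Toy DGA: derive deterministic pseudo-random .com domains from a seed and date.

    Because the list is reproducible from the seed and date alone, the
    malware operator can register one of the domains ahead of time and
    infected machines can find it without any hard-coded address.
    """
    domains = []
    for i in range(count):
        material = f"{seed}|{day.isoformat()}|{i}".encode()
        label = hashlib.sha256(material).hexdigest()[:12]
        domains.append(label + ".com")
    return domains
```

DGA classifiers then try to separate such algorithmically generated names from benign domain traffic, ideally in real time.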

#### Adversarial Examples

Modern machine learning models appear to be extremely sensitive to tiny perturbations to their inputs.
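
A minimal sketch of this sensitivity, using the fast gradient sign method (Goodfellow et al.) on a toy linear classifier (all numbers below are illustrative, not from any experiment):

```python
def fgsm_linear(w, b, x, y, eps):
    """One FGSM step against a linear classifier sign(w.x + b).

    For the hinge loss max(0, 1 - y*(w.x + b)), the gradient w.r.t. x in
    the active region is -y*w, so the loss-increasing perturbation moves
    each coordinate by -eps * y * sign(w_i).
    """
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * y * sign(wi) for xi, wi in zip(x, w)]

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

With `w = [2.0, -1.0]`, `b = 0` and the correctly classified input `x = [1.0, 1.0]` (label `y = 1`), moving each coordinate by only 0.5 already flips the prediction.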

# Teaching

I am a teaching assistant for the following courses at Ghent University:

• Artificial Intelligence
• Computergebruik (Computer Use)