abstract = "Machine learning is widely used to develop classifiers
for security tasks. However, the robustness of these
methods against motivated adversaries is uncertain. In
this work, we propose a generic method to evaluate the
robustness of classifiers under attack. The key idea is
to stochastically manipulate a malicious sample to find
a variant that preserves the malicious behaviour but is
classified as benign by the classifier. We present a
general approach to search for evasive variants and
report on results from experiments using our techniques
against two PDF malware classifiers, PDFrate and
Hidost. Our method is able to automatically find evasive
variants for all of the 500 malicious seeds in our
study. Our results suggest a general method for
evaluating classifiers used in security applications,
and raise serious doubts about the effectiveness of
classifiers based on superficial features in the
presence of adversaries.",
notes = "GP population size is 48 and the maximum generation is
20.