Many classifiers struggle when confronted with large input feature spaces such as the $12\,470$ feature dimensions provided for the challenge. This is because most features do not contribute significantly to the prediction; these superfluous features can therefore be considered noise and are mostly harmful to the classification. To alleviate this problem and reduce the number of features, we propose the use of \acfp{GA}. We show that the \ac{GA} significantly reduces the number of features and that a \ac{SVM} trained on the best subset achieves a \acs{UAR} of $0.6524$ and an accuracy of $66.89\%$ on the development set for the 3-class problem. Furthermore, we test the performance of four other classifiers on the same data set, namely \acp{DNN}, \acp{GBM}, \acp{RF}, and regularized regression, and evaluate a phoneme-based feature extraction in which the \ac{LLD} features are motivated by past challenge feature sets. The additional feature set is evaluated in isolation, achieving a \acs{UAR} of $x$, and in combination with the challenge features, resulting in a \acs{UAR} of $y$.
