Linear and Non-Linear Multimodal Fusion for Continuous Affect Estimation In-the-Wild
@InProceedings{AbdGaus:2018:ieeeFG,
  author =    "Yona Falinie A. Gaus and Hongying Meng",
  booktitle = "2018 13th IEEE International Conference on Automatic
               Face Gesture Recognition (FG 2018)",
  title =     "Linear and Non-Linear Multimodal Fusion for Continuous
               Affect Estimation In-the-Wild",
  year =      "2018",
  pages =     "492--498",
  abstract =  "Automatic continuous affect recognition from multiple
               modalities in the wild is arguably one of the most
               challenging research areas in affective computing. In
               addressing this regression problem, the advantages of
               each modality, such as audio, video and text, have
               frequently been explored, but in isolation. Little
               attention has been paid so far to quantifying the
               relationships among these modalities. Motivated to
               leverage the individual advantages of each modality,
               this study investigates behavioural modelling of
               continuous affect estimation with multimodal fusion
               approaches, using Linear Regression, Exponent Weighted
               Decision Fusion and Multi-Gene Genetic Programming. The
               capabilities of each fusion approach are illustrated by
               applying it to affect estimates generated from multiple
               modalities using classical Support Vector Regression.
               The proposed fusion methods were applied to the public
               Sentiment Analysis in the Wild (SEWA) multimodal
               dataset, and the experimental results indicate that
               proper fusion can deliver a significant performance
               improvement for all affect estimation tasks. The
               results further show that the proposed systems are
               competitive with or outperform other state-of-the-art
               approaches.",
  keywords =  "genetic algorithms, genetic programming",
  DOI =       "doi:10.1109/FG.2018.00079",
  month =     may,
  notes =     "Also known as \cite{8373872}",
}
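The pipeline summarised in the abstract (one Support Vector Regression model per modality, whose outputs are then fused at decision level) can be sketched roughly as below. This is an illustrative reconstruction, not the authors' code: the scikit-learn estimators, the toy feature matrices, and the exponent-weighted averaging with a tunable alpha are assumptions made only for this example, and the Multi-Gene Genetic Programming fusion stage is omitted.

    # Illustrative sketch (not the paper's implementation): per-modality SVR
    # predictors whose outputs are fused either by a learned linear regression
    # or by a simple exponent-weighted average.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.linear_model import LinearRegression

    def fit_modality_models(train_features, y_train):
        # One SVR per modality (e.g. audio, video, text feature matrices).
        return {name: SVR(kernel="rbf").fit(X, y_train)
                for name, X in train_features.items()}

    def modality_predictions(models, features):
        # Stack per-modality predictions into an (n_samples, n_modalities) array.
        return np.column_stack([models[name].predict(X)
                                for name, X in features.items()])

    def fit_linear_fusion(models, dev_features, y_dev):
        # Linear Regression fusion: learn combination weights on a dev set.
        P = modality_predictions(models, dev_features)
        return LinearRegression().fit(P, y_dev)

    def exponent_weighted_fusion(P, scores, alpha=1.0):
        # Hypothetical exponent-weighted decision fusion: weight each modality
        # by exp(alpha * score), where `score` might be that modality's
        # development-set agreement with the gold labels.
        w = np.exp(alpha * np.asarray(scores))
        w /= w.sum()
        return P @ w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n = 200
        # Toy stand-ins for audio/video/text features and a continuous target.
        feats = {m: rng.normal(size=(n, d))
                 for m, d in [("audio", 20), ("video", 30), ("text", 10)]}
        y = rng.normal(size=n)
        models = fit_modality_models(feats, y)
        P = modality_predictions(models, feats)
        fused_lr = fit_linear_fusion(models, feats, y).predict(P)
        fused_ew = exponent_weighted_fusion(P, scores=[0.4, 0.5, 0.3])
        print(fused_lr[:3], fused_ew[:3])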
Genetic Programming entries for Yona Falinie Abd Gaus and Hongying Meng