Location

67.201

Start Date

5-12-2016 2:30 PM

End Date

5-12-2016 3:00 PM

Presentation Type

Paper

Description

Abstract: This paper demonstrates a system that discriminates real from fake smiles with high accuracy by sensing observers’ galvanic skin response (GSR). GSR signals are recorded from 10 observers while they watch 5 real and 5 posed (acted) smile video stimuli. We investigate the effect of various feature selection methods on the processed GSR signals (recorded features) and on features computed from those signals (extracted features), measuring classification performance with three different classifiers. A leave-one-observer-out procedure is used to measure classification accuracy reliably. We find that a simple neural network (NN) combined with random subset feature selection (RSFS) applied to the extracted features outperforms all other configurations, reaching 96.5% classification accuracy on our two classes of smiles (real vs. fake). This high accuracy highlights the system’s potential for future use in discriminating observers’ reactions to authentic emotional stimuli in information systems settings such as advertising and tutoring systems.
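The leave-one-observer-out evaluation described in the abstract can be illustrated with a minimal, hypothetical sketch: each fold holds out all samples from one observer, trains a simple feed-forward network on the remaining observers, and tests on the held-out observer. The arrays `X`, `y`, and `observers`, the toy feature dimensions, and the use of scikit-learn's `MLPClassifier` are all assumptions for illustration; the paper's actual GSR preprocessing, RSFS feature selection, and network architecture are not reproduced here.

```python
# Hypothetical sketch of leave-one-observer-out evaluation (not the authors' code).
# Assumes GSR feature vectors X, binary labels y (0 = fake smile, 1 = real smile),
# and an array `observers` giving the observer ID for each sample.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy stand-in data: 10 observers x 10 stimuli, 20 extracted GSR features each.
X = rng.normal(size=(100, 20))
y = np.tile([0, 1], 50)                    # alternating fake/real labels
observers = np.repeat(np.arange(10), 10)   # observer ID per sample

logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups=observers):
    # A simple feed-forward NN; the paper's exact architecture and its RSFS
    # feature-selection step are not reproduced in this sketch.
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    )
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"Mean leave-one-observer-out accuracy: {np.mean(accuracies):.3f}")
```

With real features, the per-fold accuracies would be averaged over the 10 observer folds, which is how an overall figure such as the reported 96.5% would be obtained.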

Title

Observer’s Galvanic Skin Response for Discriminating Real from Fake Smiles