Elsevier

Neurocomputing

Volume 207, 26 September 2016, Pages 568-579

Surface EMG based handgrip force predictions using gene expression programming

https://doi.org/10.1016/j.neucom.2016.05.038

Abstract

The main objective of this study is to precisely predict muscle forces from surface electromyography (sEMG) for hand gesture recognition. A robust variant of genetic programming, namely Gene Expression Programming (GEP), is utilized to derive a new empirical model of the handgrip sEMG–force relationship. A series of handgrip forces and the corresponding sEMG signals were recorded from 6 healthy male subjects during 4 levels of percentage of maximum voluntary contraction (%MVC). Using one-way ANOVA with a multiple comparisons test, 10 time-domain features of the sEMG were extracted from homogeneous subsets and used as input vectors. Subsequently, a handgrip force prediction model was developed based on GEP. To benchmark its performance, models based on a back-propagation neural network and a support vector machine were trained with the same input vectors and data sets. The root mean square error and the correlation coefficient between the actual and predicted forces were calculated to assess the performance of the three models. The results show that the GEP model provides the highest accuracy and generalization capability among the studied models. It is concluded that the proposed GEP model is relatively short, simple and well suited for predicting handgrip forces from sEMG signals.
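The two evaluation metrics named in the abstract, root mean square error and the (Pearson) correlation coefficient between actual and predicted forces, can be computed as follows; this is a minimal sketch with our own function names, not code from the article:

```python
import math

def rmse(actual, predicted):
    """Root mean square error between measured and predicted forces."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A lower RMSE and a correlation coefficient closer to 1 both indicate a better fit between predicted and measured forces.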

Introduction

Hand gesture recognition refers to the process of understanding and classifying meaningful movements of the human hands [1]. As an interaction technique, it can potentially deliver more natural, creative and intuitive methods for human–machine interaction (HMI) and human–computer interaction (HCI) [2]. In recent years, hand gesture recognition and analysis have become important areas of natural HCI for applications ranging from sign language recognition through medical rehabilitation and prosthetics to virtual reality. Although much progress has been made, identifying force variation in hand gestures remains a difficult task.

Several methods have been proposed for the automatic recognition of hand gestures. The most common are based on computer vision [2]. Vision-based methods classify hand gestures into two types: static and dynamic [3]. The two major categories of vision-based gesture representation are three-dimensional (3D) model-based [4] and appearance-based methods [5]. 3D textured volumetric, 3D geometric and 3D skeleton models are the main techniques for model-based gesture representation, whereas appearance-based representation includes the color-based model [6], silhouette geometry model, deformable gabarit model, and motion-based model [7]. However, computer-vision-based methods have several drawbacks: vision-based devices, though user friendly, suffer from configuration complexity and occlusion problems [2]; and the recognition performance depends on the quality of the images or videos and is vulnerable to factors such as camera angle, background and lighting, making it difficult to detect subtle finger or hand movements [8], and especially force variation.

In recent years, surface electromyography (sEMG), which reflects to some extent the underlying neuromuscular activity [9], has been widely used in novel human–computer interfaces [10], [11] for the recognition of hand gestures [12], [13], speech [14], sign languages [15], [16], movements of upper and lower limbs [17], [18], body language [19], [20] and emotional expressions [21], [22]. Utilizing multi-channel EMG signals, sEMG-based gesture recognition techniques can capture rich information about hand gestures of various sizes and identify subtle finger and hand movements missed by vision-based techniques [23]. For instance, Wheeler and Jorgensen [24] recognized the hand movements corresponding to the use of a virtual joystick and a virtual numeric keypad from sEMG signals collected from four and eight channels on the forearm. Chen et al. [12] performed experiments with two-channel sEMG sensors measuring the activities of forearm muscles in 24 different hand gestures consisting of various motions of the wrist and fingers. Naik et al. [25] proposed a method for subtle hand gesture identification from forearm sEMG by decomposing the signal into components originating from different muscles. Compared to computer-vision-based methods, the sEMG-based method has several other advantages: it allows the non-invasive recording of muscle activity, senses muscle action directly, is sensitive to minute muscle movements, is largely uninfluenced by hand movements, and provides non-visual information about hand gestures [19], [26].

The use of sEMG as a control source for intelligent exoskeletons and prostheses has drawn great attention during the past six decades [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], [37], [38]. Advanced control techniques such as Pattern Recognition (PR) [35], [39], [40] and regression techniques [39], [41] have also been proposed and investigated. However, several practical limitations, such as different arm positions or postures [42], [43], [44], [45], electrode shift [46], [47], signal non-stationarity [48], and force variation [35], may still limit the clinical applicability of sEMG-based exoskeletons and prostheses.

Currently, significant research has been devoted to the problem of force variation, as such variation can have a substantial impact on the robustness of prosthesis control. In 2011, Scheme and Englehart [35] noted that the same movement performed at different force levels can produce very different signal intensities, which presents a challenge to a PR system. In 2013, Al-Timemy et al. [49] indicated that changes in the force level may degrade the accuracy of a myoelectric control system by up to 60%. In 2015, Tang et al. [50] demonstrated that force variation can have a substantial impact on the performance of elbow angle estimation, and proposed three methods to reduce its effect. In the same year, Al-Timemy et al. [51] investigated robust PR-based control of hand prostheses from the EMG of transradial amputees in the presence of six classes of movement, each with three force levels. Increasing the number of patterns in PR systems could improve the approximation, but it usually leads to more complex classifiers, more complicated and longer training processes, and deteriorating classification accuracy [52].

Regression techniques [53] can be applied to achieve independent, proportional and simultaneous control (generated force outputs), one of the most significant challenges for operating a multifunction prosthesis in a natural and intuitive manner [54]. Jiang et al. [52] applied a semi-supervised algorithm (the DOF-wise NMF algorithm), in which only information about three active degrees of freedom and the desired direction is needed to learn the relationship between muscle forces and EMG features. Nielsen et al. [54] estimated forces from EMG signals using Artificial Neural Networks (ANNs) trained with force labels from the contralateral hand. In their studies, the wrist was selected as the joint of interest. However, in a handgrip (power grip) the wrist is stabilized and all fingers are engaged [55], so for a cylindrical grasp a device with only one degree of freedom (DOF) would in theory be sufficient [56]. In our study, the handgrip movement was selected, as there is a great demand for handgrip force estimation in commercial myocontrol systems, as well as in HMI, HCI and ergonomics in general.

Duque et al. [57] showed that handgrip forces can be indirectly assessed using the sEMG of forearm muscles; clearly, a more sophisticated means of discriminating different handgrip forces is needed. As mentioned in [58], two things make this possible: feature extraction and prediction model construction. Nielsen et al. [54] suggested adopting any algorithm that enables learning the association between features of the sEMG and the produced forces. Tang et al. [50] also noted that forces vary over a large range in practical use, so expanding the training pool and applying autonomic learning algorithms are two possible solutions. Several scientific efforts have applied machine learning algorithms to investigate the handgrip sEMG–force relationship. For example, Marco et al. [59] presented a linear regression EMG–handgrip force model to predict handgrip forces for the ergonomic evaluation of hand tools. Loconsole et al. [56] developed an EMG-driven robotic hand exoskeleton for bilateral training of grasp motion in stroke, and used a multi-layer perceptron neural network to estimate the grasping force from the extracted EMG features. Arslan et al. [60] used a handlebar in isometric and anisometric contraction experiments to evaluate the relationship between grasping forces and sEMG signals, training back-propagation ANNs on the higher-order frequency moments of the signals. Ernest Nlandu et al. [61] investigated the use of features extracted from intramuscular electromyography (EMG) and an ANN for estimating grasping force in the ipsilateral and contralateral (mirrored) hands during bilateral grasping tasks.

Advanced algorithms are expected to satisfy the following requirements [39], [53]: little user training, high computational efficiency, and good performance with few electrodes. These aspects are addressed in the present study by applying a robust variant of genetic programming, namely Gene Expression Programming (GEP), to produce simple explicit formulations with high accuracy while reducing the number of EMG features. To the best of our knowledge, however, GEP-based handgrip force prediction from sEMG signals has not been explored to date.
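As background on why GEP yields "simple explicit formulations": each GEP gene is a fixed-length symbol string (a K-expression) that decodes breadth-first into an expression tree, with any unused tail ignored. A minimal sketch of this decoding and evaluation, using a hypothetical function set rather than the article's actual model, is:

```python
import operator

# Hypothetical function set: symbol -> (arity, implementation).
FUNCS = {"+": (2, operator.add),
         "-": (2, operator.sub),
         "*": (2, operator.mul)}

def karva_eval(kexpr, terminals):
    """Decode a K-expression breadth-first into an expression tree and
    evaluate it on a dict of terminal values. Symbols left over in the
    tail are simply ignored, as in GEP."""
    nodes = [[sym, []] for sym in kexpr]  # each node: [symbol, children]
    next_free = 1
    queue = [nodes[0]]
    while queue:
        node = queue.pop(0)
        if node[0] in FUNCS:
            arity = FUNCS[node[0]][0]
            for _ in range(arity):
                child = nodes[next_free]
                next_free += 1
                node[1].append(child)
                queue.append(child)

    def evaluate(node):
        sym, children = node
        if sym in FUNCS:
            return FUNCS[sym][1](*(evaluate(c) for c in children))
        return terminals[sym]

    return evaluate(nodes[0])
```

For example, the K-expression "+a*bc" decodes to a + (b * c). Because the evolved individual is itself a readable formula, the resulting force model can be inspected directly, unlike the weights of a neural network.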

This paper is a continuation of our previous work, which developed an upper-limb power-assist exoskeleton that can be controlled by the user's motion intention in real time and augment arm performance based on an EMG–angle model [17]. More specifically, our distinctive contributions are: (1) a new experimental protocol is established to test 4 levels of percentage of maximum voluntary contraction (%MVC) and collect the corresponding sEMG signals from forearm muscles; (2) one-way ANOVA with Tukey post hoc multiple comparisons is employed to extract amplitudes of sEMG signals as feature vectors from separate homogeneous subsets; (3) the EMG–force relationship is investigated based on an accurate prediction model derived by GEP. In addition, the performance of this model is compared to those of models based on a Back-Propagation Neural Network (BPNN) and a Support Vector Machine (SVM).
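Contribution (2) screens candidate features by testing whether their means differ across the four %MVC levels. The core of one-way ANOVA, the F statistic, can be sketched in a few lines; this is our own illustrative helper (the Tukey post hoc comparisons are omitted for brevity):

```python
def one_way_anova_f(groups):
    """F statistic for one-way ANOVA: between-group variance divided by
    within-group variance across k groups (e.g. one group of feature
    values per %MVC level)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

A large F (relative to the F distribution with k-1 and N-k degrees of freedom) indicates that a feature separates the force levels well and is worth keeping in the input vector.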

Section snippets

Participants

Six healthy male graduate students (mean±SD: age=26.67±0.82 years, height=175.67±2.94 cm, weight=67.83±2.86 kg, forearm length=43±1.26 cm) with no history of upper limb musculoskeletal or nervous diseases volunteered for this study. All participants were right handed and agreed to refrain from strenuous forearm or hand exercise before the experiment.

Experimental protocol

The experiments included simultaneous measurements of handgrip forces and sEMG signals for 100%, 70%, 40% and 10% MVC. Participants had to

Results of handgrip force test

In our experiment, handgrip force values at 100%, 70%, 40% and 10% MVC were each collected 120 times (6 subjects × 20 replications), giving 480 recordings across the 4 force levels. Table 1 shows descriptive statistics for all data at the 4 levels of handgrip force. All force values were normalized by formula (1) and defined as the target variables for the prediction model.
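Formula (1) itself is not reproduced in this excerpt. A common convention, shown here purely as an assumption, expresses each recorded force as a fraction of that subject's maximum voluntary contraction, so the prediction targets fall in [0, 1]:

```python
def normalize_forces(forces, mvc_force):
    """Express raw handgrip forces as fractions of the subject's MVC.
    Hypothetical stand-in for the article's formula (1), which is not
    shown in this excerpt."""
    return [f / mvc_force for f in forces]
```

Normalizing per subject removes inter-subject strength differences, so the model learns the sEMG–force relationship rather than each subject's absolute grip strength.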

Fig. 4 shows the randomly selected RMS values calculated from the rectified and filtered EMGs in time–distance graphs, which contain sEMG
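The RMS values referred to above are typically computed over short windows of the rectified, filtered signal. A minimal sketch follows; the windowing scheme (non-overlapping windows) and window length are our assumptions, not the article's stated settings:

```python
import math

def window_rms(signal, win):
    """RMS amplitude over consecutive non-overlapping windows of an
    sEMG trace, one value per window."""
    return [
        math.sqrt(sum(s * s for s in signal[i:i + win]) / win)
        for i in range(0, len(signal) - win + 1, win)
    ]
```

Each resulting RMS value summarizes the signal amplitude, and hence the level of muscle activation, in that window, yielding the time-series of amplitudes used as model inputs.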

Conclusions

This study was undertaken to verify whether a robust variant of GP, namely GEP, could be used to predict the handgrip force as a function of the integrated RMS indices of EMG signals and %MVCs. The experimental results are encouraging, as a rather simple mathematical formula was found by GEP with great accuracy. Furthermore, the derived model was benchmarked against the BPNN and SVM models. The proposed GEP model produces better outcomes than those two models, which

Acknowledgments

The authors would like to thank the participants of the experiment. This study was partly supported by the National Natural Science Foundation of China (No. 51305077), the Fundamental Research Funds for the Central Universities (No. CUSF-DH-D-2016068), the Zhejiang Provincial Key Laboratory of integration of healthy smart kitchen system (Nos. 2014E10014 and 2015F01), and the China Scholarship Council (CSC) (Nos. 201506630036 and 201506635030).


References (81)

  • P. Martí et al.

    Artificial neural networks vs. gene expression programming for estimating outlet dissolved oxygen in micro-irrigation sand filters fed with effluents

    Comput. Electron. Agric.

    (2013)
  • C. Castellini et al.

    Fine detection of grasp force and posture by amputees via surface electromyography

    J. Physiol.—Paris

    (2009)
  • G. Landeras et al.

    Comparison of gene expression programming with neuro-fuzzy and neural network computing techniques in estimating daily incoming solar radiation in the Basque country (northern Spain)

    Energy Convers. Manag.

    (2012)
  • F.M. Colacino et al.

    Subject-specific musculoskeletal parameters of wrist flexors and extensors estimated by an EMG-driven musculoskeletal model

    Med. Eng. Phys.

    (2012)
  • K. Zhang et al.

    Web music emotion recognition based on higher effective gene expression programming

    Neurocomputing

    (2013)
  • J. Rafiee et al.

    Feature extraction of forearm EMG signals for prosthetics

    Expert Syst. Appl.

    (2011)
  • A. Akl et al.

    A novel accelerometer-based gesture recognition system

    IEEE Trans. Signal Process.

    (2011)
  • S.S. Rautaray et al.

    Vision based hand gesture recognition for human computer interaction: a survey

    Artif. Intell. Rev.

    (2015)
  • M.-B. Kaâniche, Gesture recognition from video sequences (Ph.D. thesis), Université Nice Sophia Antipolis,...
  • R. Radkowski, C. Stritzke, Interactive hand gesture-based assembly for augmented reality applications, in: The Fifth...
  • H. Hasan et al.

    Human-computer interaction using vision-based hand gesture recognition systems: a survey

    Neural Comput. Appl.

    (2014)
  • L. Bretzner, I. Laptev, T. Lindeberg, Hand gesture recognition using multi-scale colour features, hierarchical models...
  • H. Ren, Subject-independent natural action recognition (Ph.D. thesis), Tsinghua University,...
  • W. Jian

    Some advances in the research of sEMG signal analysis and its application

    Sports Sci.

    (2000)
  • M.R. Ahsan et al.

    EMG signal classification for human computer interaction: a review

    Eur. J. Sci. Res.

    (2009)
  • A. Chowdhury, R. Ramadas, S. Karmakar, Muscle computer interface: a review, in: ICoRD'13, Springer, India, 2013, pp....
  • X. Chen, X. Zhang, Z.-Y. Zhao, J.-H. Yang, V. Lantz, K.-Q. Wang, Multiple hand gesture recognition based on surface EMG...
  • A.L. Fougner et al.

    System training and assessment in simultaneous proportional myoelectric prosthesis control

    J. Neuroeng. Rehabil.

    (2014)
  • Y. Li, X. Chen, J. Tian, X. Zhang, K. Wang, J. Yang, Automatic recognition of sign language subwords based on portable...
  • V.E. Kosmidou, L.J. Hadjileontiadis, S. Panas, Evaluation of surface EMG features for the recognition of American sign...
  • Z. Tang et al.

    An upper-limb power-assist exoskeleton using proportional myoelectric control

    Sensors

    (2014)
  • A. Young et al.

    Analysis of using EMG and mechanical sensors to enhance intent recognition in powered lower limb prostheses

    J. Neural Eng.

    (2014)
  • Y. Zhao, Human emotion recognition from body language of the head using soft computing techniques (Ph.D. thesis),...
  • Y. Chen et al.

    An sEMG-based attitude recognition method of nodding and head-shaking for interactive optimization

    J. Comput. Inf. Syst.

    (2014)
  • A.J. Fridlund et al.

    Pattern recognition of self-reported emotional state from multiple-site facial EMG activity during affective imagery

    Psychophysiology

    (1984)
  • Y. Chen, Z. Yang, J. Wang, Eyebrow emotional expression recognition using surface EMG signals, Neurocomputing 168,...
  • X. Zhang, X. Chen, W.-h. Wang, J.-h. Yang, V. Lantz, K.-q. Wang, Hand gesture recognition and virtual game control...
  • K.R. Wheeler et al.

    Gestures as input: neuroelectric joysticks and keyboards

    IEEE Pervasive Comput.

    (2003)
  • G.R. Naik, D.K. Kumar, H. Weghorn, M. Palaniswami, Subtle hand gesture identification for hci using temporal...
  • X. Zhang, Body gesture recognition and interaction based on surface electromyogram (Ph.D. thesis), University of...

    Zhongliang Yang received the Ph.D. degree in Computer Science from Zhejiang University, China, in 2012. He is an associate professor of the College of Mechanical Engineering at Donghua University, Shanghai, China, and is currently supported by China Scholarship Council as a postdoc researcher in the Engineering Design Centre, the Department of Engineering at the University of Cambridge, UK. His research interests include advanced exoskeleton control, signal processing of electromyography, wearable intelligent system, bio-inspired design and computing. He is a member of IEEE and ACM. He has published many papers in various journals and conference proceedings.

    Yumiao Chen is a Ph.D. candidate at the Fashion Institute of Donghua University, majoring in apparel design and engineering. She is currently supported by the China Scholarship Council as a joint doctoral student in the School of Materials, the University of Manchester, UK. She believes that the most interesting scientific research lies in the field of intelligent wearable systems and smart clothing, where there is rich potential to make clothing much more intelligent. At present, she is focusing on exploring muscle–computer and brain–computer interactive methods in HCI as a new intelligent wearable interactive system to replace the traditional mouse and keyboard, and she is also interested in fashion design, industrial design, and interaction design.

    Zhichuan Tang received the Ph.D. degree from the College of Computer Science, Zhejiang University, Hangzhou, China, in 2014. He was an Erasmus Mundus Scholar under the cLINK Program from 2014 to 2015 with the Faculty of Science and Technology, Bournemouth University, UK, and is currently a postdoc research fellow with the Modern Industrial Design Institute, Zhejiang University, China. His research interests include applied ergonomics, advanced exoskeleton control, signal processing of electromyography, human-computer interaction and industrial design.

    Jianping Wang received the B.S. degree in textile engineering from China Textile University, Shanghai, China, in 1984, and the M.S. and Ph.D. degrees in fashion design and engineering from Donghua University, Shanghai, China, in 2001 and 2007. She is a professor and Ph.D. supervisor at the Fashion Institute of Donghua University, Shanghai, China. Her research interests include advanced garment manufacture engineering, digital garment design, and apparel science research of the human body.
