Self-attention and asymmetric multi-layer perceptron-gated recurrent unit blocks for protein secondary structure prediction
Author
DEWI PRAMUDI ISMI (1); Prof. Dr.-Ing. Mhd. Reza M. I. Pulungan, S.Si., M.Sc. (2); Afiahayati, S.Kom., M.Cs., Ph.D (3)
Date
2024
Keywords
Abstract
Protein secondary structure prediction (PSSP) is one of the most prominent and widely conducted tasks in bioinformatics. Deep neural networks have become the primary methods for building PSSP models in the last decade due to their potential to enhance PSSP performance. However, there is still room for improvement, as previous studies have yet to reach the theoretical limit of PSSP model performance. In this work, we propose a PSSP model called SADGRU-SS, built with a novel deep learning architecture that combines self-attention, asymmetric multi-layer perceptron (MLP)-gated recurrent unit (GRU) blocks, and a dense block to solve the PSSP problem. Our experimental results show that using self-attention in the SADGRU-SS architecture successfully increases its performance. Moreover, placing self-attention at the frontmost position of the network produces better performance than locating it in other positions, and the asymmetric configuration of the MLP-GRU blocks yields better performance than the symmetric one. The model is trained on the standard CB6133-filtered dataset and evaluated on the standard CB513 test dataset. Our experiments show that the model outperforms other PSSP models on 8-state PSSP, achieving 70.74% and 82.78% prediction accuracy on 8-state and 3-state PSSP, respectively.
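The abstract does not give implementation details, but a minimal sketch of the described layer ordering (self-attention at the frontmost position, followed by asymmetric MLP-GRU blocks and a dense block) might look like the PyTorch outline below. All layer sizes, the use of bidirectional GRUs, the per-residue feature width, and the class `SADGRUSSSketch` itself are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of the SADGRU-SS layer ordering described in the abstract.
# Layer sizes and feature widths are assumptions, not the published hyperparameters.
import torch
import torch.nn as nn


class MLPGRUBlock(nn.Module):
    """One MLP-GRU block; 'asymmetric' is read here as the MLP width and the
    GRU hidden size differing between blocks (an assumption about the paper's terminology)."""

    def __init__(self, in_dim: int, mlp_dim: int, gru_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, mlp_dim), nn.ReLU())
        self.gru = nn.GRU(mlp_dim, gru_dim, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (batch, seq_len, in_dim)
        out, _ = self.gru(self.mlp(x))         # out: (batch, seq_len, 2 * gru_dim)
        return out


class SADGRUSSSketch(nn.Module):
    def __init__(self, feat_dim: int = 42, n_states: int = 8):
        super().__init__()
        # Self-attention installed at the frontmost position of the network.
        self.attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=1, batch_first=True)
        # Two MLP-GRU blocks with deliberately different (asymmetric) sizes.
        self.block1 = MLPGRUBlock(feat_dim, mlp_dim=256, gru_dim=128)
        self.block2 = MLPGRUBlock(2 * 128, mlp_dim=128, gru_dim=64)
        # Dense block producing per-residue state predictions.
        self.dense = nn.Sequential(nn.Linear(2 * 64, 128), nn.ReLU(), nn.Linear(128, n_states))

    def forward(self, x):                      # x: (batch, seq_len, feat_dim)
        attn_out, _ = self.attn(x, x, x)       # self-attention over the residue sequence
        h = self.block2(self.block1(attn_out))
        return self.dense(h)                   # per-residue logits

# Example: a batch of 4 sequences, 700 residues, 42 assumed input features,
# predicting 8 secondary-structure states per residue.
logits = SADGRUSSSketch()(torch.randn(4, 700, 42))
print(logits.shape)  # torch.Size([4, 700, 8])
```

For the 3-state evaluation reported in the abstract, the same network could simply be instantiated with `n_states=3`, or the 8-state outputs mapped to the 3-state classes after prediction; the abstract does not specify which approach the authors use.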