4. Conclusion
This paper has presented an adjusted BERT architecture for the AES task. The adjustment takes the form of an unfreezing mechanism in which the learning rates of the later hidden layers in BERT's fine-tuning stage are gradually increased. This increment helps fit the learning model to the AES task. As a future direction, the experimental results obtained with the proposed adjustment could provide a valuable basis for examining the capabilities of BERT for the AES task.
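For concreteness, the layer-wise learning-rate scheme described above can be sketched as follows. The sketch assumes a Hugging Face BERT-base checkpoint ("bert-base-uncased") and PyTorch's AdamW optimizer; the base rate (2e-5) and per-layer decay factor (0.95) are illustrative assumptions rather than the settings used in this paper.

import torch
from transformers import BertModel

# Illustrative sketch: later encoder layers receive larger learning rates,
# so the task-specific upper layers adapt faster than the lower layers.
model = BertModel.from_pretrained("bert-base-uncased")

num_layers = len(model.encoder.layer)  # 12 for BERT-base
base_lr = 2e-5                         # assumed rate for the top layer
decay = 0.95                           # assumed per-layer decay factor

param_groups = []
for i, layer in enumerate(model.encoder.layer):
    # layer 11 (topmost) gets base_lr; rates shrink toward layer 0
    lr = base_lr * (decay ** (num_layers - 1 - i))
    param_groups.append({"params": layer.parameters(), "lr": lr})

# The embedding layer gets the smallest rate, keeping it close to frozen.
param_groups.append({"params": model.embeddings.parameters(),
                     "lr": base_lr * (decay ** num_layers)})

optimizer = torch.optim.AdamW(param_groups)

Assigning larger rates to the later layers lets the upper, more task-specific representations move quickly toward the AES objective while the general-purpose lower layers are disturbed as little as possible.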
Acknowledgements
This publication was supported by Universiti Kebangsaan Malaysia (UKM) under grant GGP-2020-041.