A Multi-Task Rumor Detection Framework with Emotion-Awareness and Parameter-Efficient Fine-Tuning
DOI: https://doi.org/10.62517/jbdc.202601105
Author(s)
Jiahang Fan
Affiliation(s)
School of Information and Engineering, Software Engineering Major, Yanshan University (YSU), Qinhuangdao, China
Abstract
The rapid spread of rumors on social media poses a serious threat to public safety. Existing rumor detection methods model the relationship between emotion and rumor propagation insufficiently, and their generalization ability is limited in cross-domain scenarios. This paper proposes a multi-task rumor detection framework that integrates emotion awareness with parameter-efficient fine-tuning. It jointly models rumor classification and sentiment analysis through a shared encoder, and combines Prefix-Tuning and Prompt Learning to achieve parameter-efficient fine-tuning. To address spurious emotional correlations, an adaptive weighting mechanism α(s), driven by emotional intensity, dynamically adjusts the multi-task loss weights according to sample-level emotional intensity. Supervised Contrastive Learning is introduced to build a "semantic-emotional-contrastive" triple-constrained objective that pulls similar samples together and pushes dissimilar ones apart in the joint representation space, thereby suppressing spurious correlations and improving discrimination on difficult samples. A two-stage curriculum learning strategy, combined with stabilization techniques such as mixed-precision training and gradient clipping, ensures stable convergence. Experiments on the Twitter15 and Twitter16 datasets show that, compared with strong baselines such as BERT and RoBERTa, the method improves Macro-F1 and accuracy by 3.5% and 3.4%, respectively, while requiring only 0.3M trainable parameters (0.27% of the full fine-tuning parameter count), reducing memory usage by 53.7% and training time by 44.4%. Ablation studies verify the effectiveness of each component: introducing the sentiment task contributes a 1.8% improvement and the adaptive weighting mechanism a further 0.8%. This research provides a new approach for efficient rumor detection in resource-constrained scenarios.
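To make the training objective concrete, the sketch below illustrates in PyTorch one way the described components could fit together: two task heads on a shared encoder output, a sample-level weight α(s) derived from emotional intensity that scales the sentiment loss, and a supervised contrastive term (Khosla et al., 2020) over the joint representation. The abstract does not specify the exact form of α(s), the loss coefficients, or any implementation details, so every name and constant here (adaptive_alpha, MultiTaskHead, lam_con, the [0.1, 0.5] weight range) is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch of an emotion-aware multi-task loss under the assumptions above.
import torch
import torch.nn as nn
import torch.nn.functional as F


def supervised_contrastive_loss(features, labels, temperature=0.1):
    # SupCon-style loss (Khosla et al., 2020): same-label samples are pulled
    # together in the normalized embedding space, different labels pushed apart.
    features = F.normalize(features, dim=1)                        # (B, d)
    sim = features @ features.T / temperature                      # (B, B)
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))                # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)     # row log-softmax
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask   # same-label pairs
    pos_count = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_count
    return loss.mean()


class MultiTaskHead(nn.Module):
    # Two linear heads (rumor classification, sentiment) over a shared encoder output.
    def __init__(self, hidden=768, n_rumor=4, n_sent=3):
        super().__init__()
        self.rumor_head = nn.Linear(hidden, n_rumor)
        self.sentiment_head = nn.Linear(hidden, n_sent)

    def forward(self, pooled):
        return self.rumor_head(pooled), self.sentiment_head(pooled)


def adaptive_alpha(intensity, lo=0.1, hi=0.5):
    # Hypothetical alpha(s): map per-sample emotional intensity in [0, 1] to a
    # sentiment-loss weight. The exact mapping and range are assumptions.
    return lo + (hi - lo) * intensity


def total_loss(pooled, rumor_logits, sent_logits, rumor_y, sent_y, intensity,
               lam_con=0.3):
    loss_rumor = F.cross_entropy(rumor_logits, rumor_y)            # primary task
    # Sentiment loss, weighted per sample by the intensity-driven alpha(s).
    loss_sent = F.cross_entropy(sent_logits, sent_y, reduction="none")
    loss_sent = (adaptive_alpha(intensity) * loss_sent).mean()
    # Supervised contrastive term over the shared (joint) representation.
    loss_con = supervised_contrastive_loss(pooled, rumor_y)
    return loss_rumor + loss_sent + lam_con * loss_con


if __name__ == "__main__":
    torch.manual_seed(0)
    B, d = 8, 768
    pooled = torch.randn(B, d)            # stand-in for the shared encoder output
    head = MultiTaskHead(hidden=d)
    rumor_logits, sent_logits = head(pooled)
    rumor_y = torch.randint(0, 4, (B,))
    sent_y = torch.randint(0, 3, (B,))
    intensity = torch.rand(B)             # per-sample emotional intensity in [0, 1]
    print(total_loss(pooled, rumor_logits, sent_logits, rumor_y, sent_y, intensity))
```

In this reading, α(s) acts as a per-sample gate rather than a single global weight, so strongly emotional samples contribute more to the sentiment objective while weakly emotional ones are dominated by the rumor-classification and contrastive terms.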
Keywords
Rumor Detection; Multi-Task Learning; Sentiment Analysis; Prefix-Tuning; Prompt Learning; Parameter-Efficient Fine-Tuning; Contrastive Learning; Cross-Domain Robustness
References
[1] Ma, J., Gao, W., Mitra, P., Kwon, S., Jansen, B. J., Wong, K. F., & Cha, M. (2016). Detecting rumors from microblogs with recurrent neural networks. Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI), 3818–3824.
[2] Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
[3] Bian, T., Xiao, X., Xu, T., Zhao, P., Huang, W., Rong, Y., & Huang, J. (2020). Rumor detection on social media with bi-directional graph convolutional networks. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 549–556.
[4] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. NAACL, 4171–4186.
[5] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., … & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
[6] Li, X. L., & Liang, P. (2021). Prefix-tuning: Optimizing continuous prompts for generation. ACL, 4582–4597.
[7] Lester, B., Al-Rfou, R., & Constant, N. (2021). The power of scale for parameter-efficient prompt tuning. EMNLP, 3045–3059.
[8] Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., … & Gelly, S. (2019). Parameter-efficient transfer learning for NLP. ICML, 2790–2799.
[9] Caruana, R. (1997). Multitask learning. Machine Learning, 28(1), 41–75.
[10] Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., … & Krishnan, D. (2020). Supervised contrastive learning. NeurIPS, 33, 18661–18673.
[11] Fan, R., Zhao, J., Chen, Y., & Xu, K. (2013). Anger is more influential than joy: Sentiment correlation in Weibo. arXiv preprint arXiv:1309.2402.
[12] Ghanem, B., Rosso, P., & Rangel, F. (2021). FakeFlow: Fake news detection by modeling the flow of affective information. arXiv preprint arXiv:2101.09810.
[13] Zhou, X., & Zafarani, R. (2018). Fake news: A survey of research, detection methods, and opportunities. arXiv preprint arXiv:1812.00315.
[14] Chakraborty, P., Mittal, S., Gupta, M. S., Maheshwari, S., & Kumaraguru, P. (2022). ESADN: Emotion-guided and stance-aware attention network for fake news detection. arXiv preprint arXiv:2211.17108.