Abstract:
Sentiment analysis, a cornerstone of natural language processing, has advanced remarkably with deep learning models that demonstrate impressive accuracy in discerning sentiment from text across diverse domains. However, deploying such models in resource-constrained environments, i.e., settings where computing power, memory, and energy are limited, poses distinct challenges. To enable sentiment analysis in these environments, we leverage lightweight pre-trained models derived from popular architectures such as DistilBERT, MobileBERT, ALBERT, TinyBERT, ELECTRA, and SqueezeBERT. By distilling knowledge from larger models into smaller ones and applying further optimization techniques, these models aim to balance predictive performance against resource efficiency. This paper evaluates the performance of several lightweight pre-trained models on sentiment analysis tasks under such constraints and offers insights into their viability for practical deployment.
Key words: sentiment analysis; lightweight models; resource-constrained environments; pre-trained models
DOI: 10.11916/j.issn.1005-9113.2023103
CLC Number: TP183
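As a concrete illustration of the lightweight-model deployment the abstract describes, the following is a minimal sketch, not the paper's own implementation. It assumes the Hugging Face transformers library and the publicly released distilbert-base-uncased-finetuned-sst-2-english checkpoint; neither is specified in the abstract.

# A minimal sketch (not the paper's own setup), assuming the Hugging Face
# transformers library and a public DistilBERT sentiment checkpoint.
from transformers import pipeline

# DistilBERT is one of the lightweight architectures named in the abstract;
# this checkpoint is a distilled BERT fine-tuned for binary sentiment (SST-2).
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# A single CPU forward pass, the kind of inference a resource-constrained
# deployment would run.
print(classifier("The battery life on this device is excellent."))
# -> [{'label': 'POSITIVE', 'score': ...}]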