Article

A Novel Bi-Dual Inference Approach for Detecting Six-Element Emotions

School of Computer and Software Engineering, Xihua University, Chengdu 610039, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9957; https://doi.org/10.3390/app13179957
Submission received: 15 July 2023 / Revised: 23 August 2023 / Accepted: 31 August 2023 / Published: 3 September 2023

Abstract

In recent years, there has been rapid development in machine learning for solving artificial intelligence tasks in various fields, including translation, speech, and image processing. These AI tasks are often interconnected rather than independent. One specific type of relationship is known as structural duality, which exists between multiple pairs of artificial intelligence tasks. The concept of dual learning has gained significant attention in the fields of machine learning, computer vision, and natural language processing. Dual learning uses primal tasks (mapping from domain X to domain Y) and dual tasks (mapping from domain Y to domain X) to enhance the performance of both tasks. In this study, we propose a general framework called Bi-Dual Inference by combining the principles of dual inference and dual learning. Our framework generates multiple dual models and primal models by utilizing two dual tasks: sentiment analysis of input text and sentence generation from sentiment labels. We create these model pairs (primal model f, dual model g) by employing different initialization seeds and data access sequences. Each primal and dual model is implemented as a distinct LSTM model. By reasoning about a single task with multiple similar models in the same direction, our framework achieves improved classification results. To validate the effectiveness of our proposed model, we conduct experiments on two datasets, NLPCC2013 and NLPCC2014. The results demonstrate that our model outperforms the best baseline model in terms of the F1 score, achieving an improvement of approximately 5%. Additionally, we analyze the parameter settings of our proposed model, including the number of training iterations, the α and λ parameters, the batch size, the training sentence length, and the hidden layer size. These experimental results further confirm the effectiveness of our proposed model.

1. Introduction

In recent years, the rapid development of the Internet, cloud computing, and artificial intelligence technology has led to the emergence of data-intensive research, often referred to as the “fourth paradigm” of scientific discovery. In today’s society, influenced by the prevalence of big data, information surrounds us. One key characteristic of the information society is the exponential growth of data generated on a daily basis. Over the past decade, the number of internet users has consistently increased, highlighting the internet’s significance in people’s lives. Moreover, users have transitioned from passive recipients of information to active producers, as they contribute to the vast amount of information available online. With the presence of social networks, which contain large user bases and exhibit “small world” characteristics, news and discussions among netizens can quickly spread and form network public opinion. In the real world, people engage in social activities to communicate with each other. The emergence of online social platforms like Weibo, Facebook, and Twitter has expanded the scope of people’s social interactions, blurring the boundaries between their real and virtual lives. Consequently, individuals now regularly share their preferences, interests, opinions, and even their thoughts on movies or other topics on the internet. However, it is important to note that users do not publish content on social networks without reason. Each comment naturally expresses the user’s emotions or attitudes, such as happiness, sadness, anger, fear, disgust, and surprise. For instance, when someone posts a group photo with a friend to celebrate a birthday, it expresses a feeling of happiness. On the other hand, if a recently purchased item does not match the description displayed on the shopping platform, the comments will likely exhibit anger. As a result, social platforms are filled with people’s opinions and comments on various products, celebrities, news events, and more. These comments can have a significant impact on subsequent users, shaping public opinion and providing valuable information and commercial insights. Therefore, accurately identifying the emotions expressed in user comments, understanding the public’s attitudes and reactions to specific events, and effectively intervening and guiding based on this information are crucial for the stable development of society.
Emotions play a significant role in various aspects of our lives, including personal well-being, academic performance, professional success, physical and mental health, and more. On a personal level, emotions can impact our well-being via physical exhaustion, psychological anxiety, and behavioral patterns. At an organizational level, emotions can influence the decision-making and management behavior of leaders. Interpersonally, positive emotions foster harmony, while negative emotions can lead to conflict. Emotions also bear implications for broader social dynamics, such as social image and evaluation. Analyzing the emotional tone of user comments on social platforms is crucial in understanding public sentiment. By examining online discussions, we can gain insights into citizens’ opinions on pressing issues, aiding the government in better serving the people and managing public sentiment. Additionally, extracting and analyzing evaluations and sentiments can assist consumers in making informed choices about products and businesses. When people decide to dine at a restaurant, they often rely on online platforms like Meituan and Public Comment to check previous reviews and dish recommendations. Additionally, after their meal, they can share their own evaluation of the restaurant on these platforms, providing assistance to future customers. From the perspective of restaurant owners, they can improve their establishment’s image and services based on online evaluations, aiming to attract more customers and generate higher profits in the future. Artificial intelligence robots can also analyze users’ psychological states and provide them with psychological comfort, serving as emotional companions or outlets for venting emotions. This can be beneficial for the treatment of individuals with psychological conditions.
The study of emotions in social networks holds great importance and value for various applications in personal development, business marketing, administrative science, and even politics. Most of the current research categorizes users’ emotions into broad categories such as positive, negative, and neutral, with limited studies focusing on more nuanced classifications of emotions in Chinese short texts. Therefore, there is an urgent need to analyze user emotions in real-time. In summary, our research objective is to swiftly and accurately identify user emotions by analyzing emotional information in diverse texts. This analysis can serve as a valuable tool in guiding public opinion, providing product recommendations, offering psychotherapy support, and other related fields.

2. Related Work

There are numerous methods available for sentiment analysis, which can be classified into five main groups: sentiment analysis based on sentiment lexicons, sentiment analysis based on machine learning, sentiment analysis based on deep learning, sentiment analysis based on multi-strategy mixtures, and sentiment analysis based on dual learning.

2.1. Sentiment-Lexicon-Based Sentiment Analysis

The sentiment-dictionary-based analysis method relies on the construction of sentiment dictionaries as its core technology. This method extracts emotion words from input texts or sentences using techniques such as word segmentation, and the extracted words are then matched against existing words in the emotion dictionary to determine the emotional tendency of the input text or sentence. Early sentiment dictionaries typically included only three categories of sentiment words: positive, negative, and neutral. One of the earliest sentiment dictionaries is SentiWordNet, constructed by Baccianella et al. [1]. This sentiment dictionary assigns sentiment polarity scores to each word to reflect the user’s sentiment attitude. Desmet et al. [2] proposed that dictionaries and semantic features can effectively represent information in text data, and they used this as the basis for building an emotion detection system. Cai et al. [3] constructed a three-layer sentiment dictionary that combines sentiment words, sentiment objects, and sentiment polarity. By utilizing this combination, the researchers were able to reduce the ambiguity of multiple meanings and achieve a more detailed sentiment analysis, resulting in positive outcomes. Bandhakavi et al. [4] expanded on Cai’s work by generating domain-specific sentiment dictionaries and applying them to sentiment feature extraction, thereby enhancing the accuracy of sentiment analysis. Taboada et al. [5] utilized lexical dictionaries with semantic orientations, such as polarity and intensity, through the use of the semantic orientation calculator (SO-CAL). This approach considered aspects such as intensification and negation to identify the overall sentiment tendencies of the text. With the rapid emergence of new words, Wu et al. [6] employed co-occurrence information of sentiment words and sentence sentiment labels to construct a sentiment lexicon. They expressed sentiment categories based on average sentiment intensity and successfully performed sentiment classification tasks.

2.2. Machine-Learning-Based Sentiment Analysis

The machine-learning-based approach involves training sentiment classifiers using both labeled and unlabeled data. This approach utilizes machine learning algorithms for feature extraction and predicts the sentiment polarity of the input. It can be broadly categorized into three types: supervised machine learning, semi-supervised machine learning, and unsupervised machine learning.
Supervised learning algorithms typically require a substantial amount of manually labeled data to train the model. Common supervised machine learning algorithms include Support Vector Machine (SVM) [7,8], Naive Bayesian (NB) [9], and K-Nearest Neighbors (KNN) [10,11]. In a comparative analysis by Wawre et al. [12], the NB algorithm was found to outperform the SVM algorithm when training datasets had larger data sizes. Kaur et al. [13] utilized the KNN algorithm to classify input sentences into positive, negative, and neutral categories. They employed the N-gram algorithm for feature extraction and achieved better classification performance results compared to the SVM algorithm. Pang et al. [14] compared the performance of different machine learning algorithms for sentiment classification on a movie review dataset, achieving a maximum accuracy of 82.9%. Rezwanul et al. [15] employed the KNN algorithm to measure the Euclidean distance of the input text, enabling successful classification.
They also applied the SVM algorithm to determine the sentiment labels of the input text and found that KNN outperforms SVM in sentiment analysis owing to its lower experimental dimensionality. Xue et al. [16] utilized the Latent Dirichlet Allocation (LDA) model, a machine learning algorithm, to analyze discussions related to the novel coronavirus. This approach helped identify the topics discussed and the sentiments expressed within those topics. Liu et al. [17] employed an integrated-learning-based approach and the Synthetic Minority Oversampling Technique (SMOTE) to extract features from sentiment words. Their final experiment yielded optimal results on the dataset. Jiang et al. [18] proposed the Emoticon Space Model (ESM), which utilizes a large number of emoticons to construct word representations from a vast amount of unlabeled data. This model enables and enhances sentiment analysis by projecting words and microblog posts into the emoticon space. Unsupervised learning methods do not require a large amount of manually labeled data, but their accuracy is generally lower.

2.3. Deep-Learning-Based Sentiment Analysis

In contrast to machine learning, deep learning provides the ability to automatically extract primary features and combine them into high-level features before conducting sentiment analysis. Deep-learning-based sentiment analysis can be broadly categorized into four categories: single-neural-network-based, hybrid-neural-network-based, attention-mechanism-based, and pre-trained-model-based approaches. Gopalakrishnan et al. [19] propose six streamlined versions of the LSTM model with reduced parameters, known as slim LSTMs, and evaluate two of them on a dataset to demonstrate how different parameter settings affect the standard LSTM model. Ahmed et al. [20] propose a weakly supervised neural network model and an attention-based LSTM model. They reduce the weight of non-emotional information in a sentence, improving the accurate determination of emotional polarity. Ren et al. [21] propose a new recursive neural network algorithm that combines network parameters, achieving better emotion classification while being computationally more efficient than bidirectional recurrent neural networks. Himeno et al. [22] utilize a convolutional neural network to classify the intensity of emotions in tweets. Can et al. [23] construct a single model using the language with the largest dataset and propose a novel RNN-based framework for sentiment analysis of small languages, which experimentally yields better results. In terms of hybrid neural networks, Xing et al. [24] introduce a novel parametric convolutional neural network through a study on aspect-level emotion classification, achieving improved classification results on the dataset through experimental evaluation. The proposed model not only effectively addresses the issues of gradient explosion and vanishing, but also excels at extracting contextual information from sentences. Wei et al. [25] introduce an LSTM model with a multipolar orthogonal attention mechanism, which successfully captures the nuanced features between words and sentiment orientations, resulting in improved classification results for implicit sentiment analysis. Mikolov et al. [26] propose the CBOW (Continuous Bag-of-Words) model, which predicts the lexical properties of sentiment words based on context. They also propose the Skip-gram model, which utilizes sentiment words to predict the lexical properties of surrounding words, achieving fine-grained classification of sentiment texts. Li et al. [27] propose a neural network that incorporates a multi-attention mechanism to analyze sentiment reasons. This model effectively identifies the underlying factors contributing to sentiment expressions.

2.4. Multi-Strategy Mixture-Based Sentiment Analysis

In situations where extensive data are not available, sentiment dictionaries can still be effective in analyzing detailed text. However, determining the sentiment polarity solely based on sentiment values can be challenging. In terms of feature extraction, combining machine learning and deep learning techniques has shown promising results, although it requires a substantial amount of training data. Consequently, in recent years, there has been a rise in approaches that blend multiple learning methods for sentiment analysis. Bruyne et al. [28] propose a novel sentiment classification system that integrates lexical, n-gram, sentence style, syntactic, and semantic features. This is achieved by combining 11 binary classifiers in a chain of classifiers, where each model incorporates the predictions of the previous models as additional features. Ultimately, the predicted labels are combined to address a multi-label classification problem. Turcu et al. [29] conduct sentiment analysis of tweets by integrating supervised machine learning models such as NB, KNN, and SVM, along with Tensorflow and decision trees. This integration of various models allows for a comprehensive analysis of sentiment in tweets. Rajabi et al. [30] introduce a multi-channel, multi-filter CNN-BiLSTM model for sentiment analysis. Lu et al. [31] propose a sentiment analysis method for movie review texts, combining SVM classification with machine learning and sentiment dictionary. Chen et al. [32] develop an independent military sentiment dictionary and combine it with deep learning models with different parameters to improve classification performance. Keshtkar et al. [33] propose a novel approach that utilizes possible sentiment levels to achieve better results compared to standard machine learning methods. Mukwazvure et al. [34] employ a sentiment dictionary for polarity detection and then train SVM and KNN machine learning algorithms for sentiment analysis of a news comment dataset. Esmin et al. [35] utilize the HC flattening classification method to categorize Twitter text into a three-level structure and perform sentiment analysis by combining it with a MulticlassSVM classifier.

2.5. Dual-Learning-Based Sentiment Analysis

In 2016, dual learning emerged as a novel machine learning approach to address the heavy reliance of deep learning on large labeled datasets. This approach leverages the structural duality between AI tasks, allowing models to learn from limited or insufficient data. Xia et al. [36] observed that the intrinsic connection between the two models being trained had not been effectively exploited. To address this, they proposed the dual supervised learning (DSL) framework for simultaneous and explicit training of the two models, in which the probabilistic correlation between the models regularizes the training process. Experimental results demonstrate that DSL outperforms the baseline model in sentiment analysis. Building upon the DSL framework, Bian et al. [37] applied the discovered dual structural relationship to the inference phase of AI tasks. They proposed a general framework for dual inference, enabling dual inference on a single task using two existing models without the need for retraining. Experiments show that dual inference significantly improves the performance of sentiment analysis. Liu et al. [38] proposed a new learning framework that considers the duality of the tasks when designing the architectures of the primal and dual models. They establish links between model parameters that play similar roles in the two tasks, ultimately achieving better sentiment analysis results.

3. Bi-Dual Inference for Detecting Emotions

Definition 1.
(structural duality): Structural duality refers to the relationship between two machine learning tasks where one task maps from space X to space Y, while the other task maps from space Y to space X. This concept highlights the reciprocal nature of the two tasks and their interconnectedness.
Definition 2.
(primal and dual models): When two tasks exhibit duality, we refer to the task mapping from space X to space Y as the primal task or forward task, and its corresponding model as the primal model or forward model, denoted as f. Similarly, the task mapping from space Y to space X is referred to as the dual task or backward task, with its corresponding model denoted as the dual model or backward model, denoted as g.
In this section, we present the mathematical formulation of the Bi-Dual Inference framework. Specifically, sentiment classification and sentence generation are considered dual forms of artificial intelligence tasks. Sentiment classification aims to classify the polarity of a given natural language sentence, while the dual task focuses on automatically generating sentences with specific emotional polarity categories. Let us consider two domains, denoted as X and Y. In this paper, X represents the domain of sentences and Y represents the domain of polarity labels. We define D_x as the collection of training data from domain X and D_y as the collection of training data from domain Y; note that D_x ⊆ X and D_y ⊆ Y. Our objective is to learn two agents, f: X → Y and g: Y → X, which represent the models for the primal task and the dual task, respectively. We introduce the mapping Δ_x(x, x′) from X × X to ℝ, which represents the dual reconstruction error between x and x′, also known as the feedback signal; here, x and x′ are elements of X. Similarly, we have the mapping Δ_y(y, y′), analogous to Δ_x(x, x′), where y and y′ are elements of Y.

3.1. Primal Model Construction

To enhance the primal task model f, we introduce a classical LSTM-based emotion classification method that utilizes a long short-term memory network [39]. This approach involves supervised learning within the previous sequence-to-sequence learning framework. The methodology employs an LSTM model to sequentially process the input sequence, encoding it word-by-word according to Equations (2)–(6). This process allows for the extraction of a fixed-dimensional vector representation. Subsequently, another LSTM model is employed to extract the output from the vector sequence. This second LSTM model functions as a recursive neural network language model, adapting to the input sequence. The modified sequence-to-sequence learning model, as depicted in Figure 1, aims to reconstruct the input sequence itself by replacing it with the output sequence within the sequence framework. In this sequence auto-encoder, a recurrent network reads the input sequence, generates a hidden state, and reconstructs the original sequence. Notably, the decoder and encoder networks share the same weights, as illustrated in Figure 1.
In the context of the LSTM, our objective is to estimate the conditional probability p(y_t | v, y_1, …, y_{t−1}). Here, (x_1, …, x_T) represents the input sequence, and (y_1, …, y_{T′}) denotes the corresponding output sequence, which may have a different length T′ compared to T. The LSTM achieves this by first obtaining a fixed-dimensional representation v of the input sequence (x_1, …, x_T), which is derived from the last hidden state of the LSTM. Subsequently, the LSTM computes the probability of (y_1, …, y_{T′}) using a standard LSTM formulation, where the initial hidden state is set to the representation v.
p(y_1, …, y_{T′} | x_1, …, x_T) = ∏_{t=1}^{T′} p(y_t | v, y_1, …, y_{t−1})    (1)
In Equation (1), each conditional probability distribution p(y_t | v, y_1, …, y_{t−1}) is represented using a softmax function over all words in the vocabulary. The recurrent network of the sequence auto-encoder is employed to process the input sequence and generate a hidden state, which is then utilized to reconstruct the original sequence. In other words, by combining Equation (1) with the basic LSTM formulas (Equations (2)–(6)), the LSTM model first reads the input sentence “WXYZ” to calculate the vector representations of “W”, “X”, “Y”, “Z”, and “eos”. Here, “eos” is a special symbol denoting the end of the sentence, allowing the model to define distributions over sentences of varying lengths. Subsequently, the obtained vector representations are used to calculate the probabilities of “W′”, “X′”, “Y′”, and “Z′”, resulting in the generation of a new sentence “W′X′Y′Z′”. This generated sentence is then read into the hidden state as the input sequence for the next step, facilitating the reconstruction of the original sequence. Notably, the LSTM reads the input statements in reverse order, which enhances the modeling of short-term dependencies and simplifies the optimization problem.
i_t = σ(W_{xi} x_t + W_{hi} h_{t−1} + W_{ci} c_{t−1} + b_i)    (2)
f_t = σ(W_{xf} x_t + W_{hf} h_{t−1} + W_{cf} c_{t−1} + b_f)    (3)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_{xc} x_t + W_{hc} h_{t−1} + b_c)    (4)
o_t = σ(W_{xo} x_t + W_{ho} h_{t−1} + W_{co} c_{t−1} + b_o)    (5)
h_t = o_t ⊙ tanh(c_t)    (6)
In the equations above, σ denotes the logistic sigmoid function, and the vectors i_t, f_t, o_t, and c_t represent the input gate, forget gate, output gate, and cell activation vectors, respectively. These vectors are of the same size as the hidden vector h_t. The weight matrix subscripts indicate their purpose; for example, W_{hi} is the hidden–input gate matrix and W_{xo} is the input–output gate matrix. The weight matrices connecting the cell to the gate vectors (e.g., W_{ci}) are diagonal, meaning that each element in a gate vector only receives input from the corresponding element in the cell vector.
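To make the gate computations concrete, the following is a minimal NumPy sketch of one LSTM step following Equations (2)–(6); the weight names mirror the notation above, while the parameter container, shapes, and diagonal peephole matrices stored as vectors are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step per Equations (2)-(6).

    p maps names to parameters: W_xi, W_hi, W_ci, b_i (input gate),
    W_xf, W_hf, W_cf, b_f (forget gate), W_xc, W_hc, b_c (cell candidate),
    W_xo, W_ho, W_co, b_o (output gate). The peephole matrices W_c* are
    diagonal, so they are stored as vectors and applied elementwise.
    """
    i_t = sigmoid(p["W_xi"] @ x_t + p["W_hi"] @ h_prev + p["W_ci"] * c_prev + p["b_i"])  # Eq. (2)
    f_t = sigmoid(p["W_xf"] @ x_t + p["W_hf"] @ h_prev + p["W_cf"] * c_prev + p["b_f"])  # Eq. (3)
    c_t = f_t * c_prev + i_t * np.tanh(p["W_xc"] @ x_t + p["W_hc"] @ h_prev + p["b_c"])  # Eq. (4)
    o_t = sigmoid(p["W_xo"] @ x_t + p["W_ho"] @ h_prev + p["W_co"] * c_prev + p["b_o"])  # Eq. (5)
    h_t = o_t * np.tanh(c_t)                                                             # Eq. (6)
    return h_t, c_t
```

Iterating `lstm_step` over the (reversed) embedded input sentence and keeping the final hidden state yields the fixed-dimensional representation v of Equation (1), which then initializes the decoding LSTM that reconstructs the sentence in the sequence auto-encoder of Figure 1.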

3.2. Dual Model Construction

In this paper, we draw inspiration from recurrent language models based on the LSTM [40]. On this basis, we develop a sentence generation model, as depicted in Figure 2.
In this context, we first project two emotion labels into an emotion embedding of a specific size. These labels represent different ways of expressing the same emotion; for instance, “happy” can be expressed as “joy” or “gladness”. All emotion labels belong to the set P, and they serve as input to the LSTM unit. In other words, let x represent the sentence, y represent the emotion label, and x_t represent the word at a particular time step. The LSTM cell takes W_w E_w x_{t−1} + W_s E_s y as input, where the W matrices connect the embeddings to the LSTM cell, and E_w and E_s represent the embedding matrices for the words and the emotion labels, respectively. The parameters E and W are learned during training. We calculate the dependency r_t between sentences by generating the interaction between the emotion label set P and the memory cell c_t using Equation (7).
r_t = σ(W_r W_p p + W_r c_t + b_r)    (7)
Since the sentence is generated word by word, the gate r_t obtained from Equation (7) provides a controlled context vector, and Equation (8) then computes the output layer h_t of the LSTM.
h_t = o_t ⊙ tanh(c_t) + r_t ⊙ (W_p P)    (8)
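As a sketch of how the dual model consumes the emotion label, the fragment below (reusing `sigmoid` and `lstm_step` from the previous sketch) forms the LSTM input W_w E_w x_{t−1} + W_s E_s y and applies the gate of Equations (7) and (8) to inject label information into the output layer; the embedding lookups, the output projection `W_out`/`b_out`, and the exact grouping of the W_r terms are assumptions made for illustration.

```python
def dual_model_step(prev_word_id, label_id, h_prev, c_prev, p):
    """One decoding step of the label-conditioned generator (dual model g).

    Assumes `sigmoid` and `lstm_step` from the previous sketch, plus embedding
    matrices E_w (words) and E_s (emotion labels), projections W_w, W_s, W_p, W_r,
    and an output layer W_out, b_out stored in the parameter dict p.
    """
    # LSTM input: W_w E_w x_{t-1} + W_s E_s y  (Section 3.2)
    x_in = p["W_w"] @ p["E_w"][prev_word_id] + p["W_s"] @ p["E_s"][label_id]
    h_t, c_t = lstm_step(x_in, h_prev, c_prev, p)

    label_ctx = p["W_p"] @ p["E_s"][label_id]                          # projected label information W_p P
    r_t = sigmoid(p["W_r"] @ label_ctx + p["W_r"] @ c_t + p["b_r"])    # Eq. (7)
    h_out = h_t + r_t * label_ctx                                      # Eq. (8): o_t⊙tanh(c_t) + r_t⊙(W_p P)

    logits = p["W_out"] @ h_out + p["b_out"]                           # scores for the next word
    return logits, h_t, c_t
```

Running this step repeatedly, feeding back the sampled word at each position, generates a sentence conditioned on the chosen emotion label y.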

3.3. Bi-Dual Inference Construction

In this subsection, we explain how the two models above (the primal model and the dual model) are jointly trained. We establish a standard dual learning loss function [41] for a single primal model f_0 and a single dual model g_0, as shown in Equation (9).
ℓ_dual = (1/|M_x|) Σ_{x∈D_x} Δ_x(x, g(f(x))) + (1/|M_y|) Σ_{y∈D_y} Δ_y(y, f(g(y)))    (9)
Here, D_x and D_y refer to the data collected from domains X and Y, respectively. In this context, X represents sentiment-laden sentence data, while Y represents sentiment polarity labels. It is worth noting that D_x ⊆ X and D_y ⊆ Y. The quantities |M_x| and |M_y| denote the numbers of samples in the datasets D_x and D_y, respectively.
Taking inspiration from dual inference [37], if inference for the primal model or the dual model follows the most natural and direct approach, we can combine the loss functions of the primal and dual tasks and infer jointly; by minimizing the combined loss, we take the minimizer as the inference result. Specifically, for the primal task and the dual task, the inference rules are given by Equations (10) and (11), respectively.
f_dual(x) = argmin_{y∈Y} [α ℓ_f(x, y) + (1 − α) ℓ_g(x, y)]    (10)
g_dual(y) = argmin_{x∈X} [β ℓ_f(x, y) + (1 − β) ℓ_g(x, y)]    (11)
In this context, the hyperparameters α and β balance the trade-off between the two losses, and their values are adjusted based on performance on the validation set. The loss functions of the single primal model and the single dual model are denoted as ℓ_f and ℓ_g, respectively. As sentiment analysis is treated as a multi-class problem in this paper, we adopt the negative log-likelihood as the loss function, as shown in Equations (12) and (13).
ℓ_f(x, y) = −log P(y | x; f)    (12)
ℓ_g(x, y) = −log P(x | y; g)    (13)
As the loss functions ℓ_f and ℓ_g are negative log-likelihoods, their values range from zero to infinity; the closer they are to zero, the better the trained primal and dual models fit the data. Many commonly used inference rules in machine learning tasks can be described using Equations (14) and (15).
f(x) = argmin_{y∈Y} ℓ_f(x, y)    (14)
g(y) = argmin_{x∈X} ℓ_g(x, y)    (15)
By setting  α  and  β  to one, we can consider the extreme cases in dual inference. From this perspective, we perceive dual inference as a more comprehensive framework for inference.
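To illustrate the inference rule of Equations (10), (12), and (13) for the primal direction, here is a minimal Python sketch; the probability callables `p_y_given_x` and `p_x_given_y` stand in for the trained primal and dual models and are assumptions of this sketch rather than a prescribed interface.

```python
import math

def dual_inference_label(x, labels, p_y_given_x, p_x_given_y, alpha):
    """Return the label minimizing alpha*l_f + (1-alpha)*l_g, as in Eq. (10).

    p_y_given_x(y, x) returns P(y|x) under f; p_x_given_y(x, y) returns P(x|y) under g.
    """
    best_label, best_loss = None, float("inf")
    for y in labels:
        l_f = -math.log(p_y_given_x(y, x) + 1e-12)   # Eq. (12): NLL under the primal model f
        l_g = -math.log(p_x_given_y(x, y) + 1e-12)   # Eq. (13): NLL under the dual model g
        loss = alpha * l_f + (1.0 - alpha) * l_g     # Eq. (10): weighted combination
        if loss < best_loss:
            best_label, best_loss = y, loss
    return best_label
```

Setting alpha = 1 recovers the ordinary rule of Equation (14), i.e., inference with f alone, which matches the extreme case discussed above.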
It is important to note that the dual inference approach studied in [37] does not involve retraining or modifying the models of the primal and dual tasks. However, in this paper, because there are multiple f and g, multiple training processes are required. To address this, we adopt the probabilistic duality method [36] to train the multiple f and g. The core algorithm is presented in Equations (16) and (17).
G_f = ∇_{θ_xy} (1/m) Σ_{j=1}^{m} [ℓ_1(f(x_j; θ_xy), y_j) + λ_xy ℓ_dual(x_j, y_j; θ_xy, θ_yx)]    (16)
G_g = ∇_{θ_yx} (1/m) Σ_{j=1}^{m} [ℓ_2(g(y_j; θ_yx), x_j) + λ_yx ℓ_dual(x_j, y_j; θ_xy, θ_yx)]    (17)
In this context, m denotes the number of training sample pairs (x_j, y_j) in a minibatch. The Lagrange parameters λ_xy and λ_yx, along with the parameters θ_xy and θ_yx, are trained in the primal model and the dual model, respectively.
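The following PyTorch-style sketch shows one joint update in the spirit of Equations (16) and (17); it treats f and g as end-to-end differentiable callables, and the concrete forms of the likelihood losses, the dual feedback errors Δ_x and Δ_y, and the handling of the discrete sentence step are assumptions of this sketch.

```python
import torch

def joint_update(f, g, batch_x, batch_y, loss_f, loss_g,
                 delta_x, delta_y, lam_xy, lam_yx, lr=0.01):
    """One minibatch step following Equations (16)-(17).

    f, g      : primal/dual models (torch.nn.Module; parameters theta_xy, theta_yx)
    loss_f/g  : per-task likelihood losses; delta_x/y : dual feedback errors
    lam_xy/yx : Lagrange weights; lr : plain SGD step size (illustrative)
    """
    # Shared dual regularizer, cf. Eq. (9)
    dual = delta_x(batch_x, g(f(batch_x))).mean() + delta_y(batch_y, f(g(batch_y))).mean()

    obj_f = loss_f(f(batch_x), batch_y).mean() + lam_xy * dual   # objective behind Eq. (16)
    obj_g = loss_g(g(batch_y), batch_x).mean() + lam_yx * dual   # objective behind Eq. (17)

    params_f, params_g = list(f.parameters()), list(g.parameters())
    grads_f = torch.autograd.grad(obj_f, params_f, retain_graph=True)  # G_f
    grads_g = torch.autograd.grad(obj_g, params_g)                     # G_g

    with torch.no_grad():  # simple SGD update of theta_xy and theta_yx
        for p, gr in zip(params_f, grads_f):
            p -= lr * gr
        for p, gr in zip(params_g, grads_g):
            p -= lr * gr
```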
If we consider f and g in Equations (10) and (11) as f_0 and g_0, respectively, applying different training methods will lead to multiple f and g models. In this paper, we denote these models as f_i, where i ∈ {0, 1, 2, …, N − 1}, and g_j, where j ∈ {0, 1, 2, …, N − 1}. Consequently, we update Equations (12) and (13) to Equations (18) and (19), as shown below.
ℓ_f(x, y) = −log P(y | x; f_i)    (18)
ℓ_g(x, y) = −log P(x | y; g_j)    (19)
In order to further explore the potential of dual inference, we introduce multiple dual inference model pairs (f, g) into the learning system. In a dual structure, all models in the same direction map the space X to the space Y (or vice versa), resulting in functional similarities as well as variability. In this study, we generate multiple pairs of different (f, g) models through independent training using random seeds with different initializations and data access orders. The output of each proxy (f, g) model pair provides feedback signals to f_i or g_j, enabling the models to receive additional gains during training. Having multiple proxy models generally leads to more reliable, robust, and comprehensive feedback, similar to a majority vote of multiple experts, which is expected to improve the final model performance. Therefore, for any q_i ≥ 0 and v_j ≥ 0, where i, j ∈ {0, …, N − 1}, we define Equations (20) and (21) as follows.
F_q = Σ_{i=0}^{N−1} q_i f_i,  s.t.  Σ_{i=0}^{N−1} q_i = 1    (20)
G_v = Σ_{j=0}^{N−1} v_j g_j,  s.t.  Σ_{j=0}^{N−1} v_j = 1    (21)
F_q and G_v denote the weighted combinations of the multiple primal models and dual models, respectively; the constraints following s.t. require the weights q_i and v_j to sum to one.
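As a small illustration of Equation (20) (Equation (21) is analogous for the dual direction), the sketch below combines the class-probability vectors produced by the N primal models using simplex weights q; the uniform default weights are an assumption, since the text does not prescribe how q and v are chosen.

```python
import numpy as np

def combine_models(prob_list, weights=None):
    """F_q of Eq. (20): convex combination of N model outputs.

    prob_list[i] is the probability vector produced by f_i for one input;
    weights is q with q_i >= 0 and sum(q) == 1 (uniform if omitted).
    """
    n = len(prob_list)
    q = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, dtype=float)
    assert np.all(q >= 0) and abs(q.sum() - 1.0) < 1e-8, "weights must lie on the simplex"
    return sum(w * np.asarray(p) for w, p in zip(q, prob_list))

# e.g., three primal models voting on a six-emotion distribution:
# combined = combine_models([p0, p1, p2])
```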
Figure 3 illustrates the overall framework of our proposed model. The robots, labeled f_0, f_1, f_2 or g_0, g_1, g_2, symbolize models in the same direction. These models exhibit both similarities and differences in function. The different colors indicate that they are independently trained using random seeds with different initializations and data access orders. The Δ_x and Δ_y in the middle of Figure 3 represent the mutual feedback between the two groups of models: by comparing the differences between the generated x′ and x, or between the generated y′ and y, in a closed-loop system, feedback signals are provided to the models to enhance their mutual training. F_q and G_v represent the sets of multiple primal models and dual models, respectively. The primal task corresponds to sentiment classification for a given text, while the dual task involves sentence generation for polar sentiment labels.
To provide a more detailed explanation, we can divide Figure 3 into two separate framework diagrams, as shown in Figure 4 and Figure 5. In both diagrams, robots with the same shape represent proxy models in the same direction. Multiple proxy models collaborate to reason about a task simultaneously. This arrangement enables the dual tasks to receive feedback from multiple agent models, leading to improved results.
For any x ∈ X, the proxies f_i collaborate to generate a corresponding y′ ∈ Y via y′ = F_q(x). The proxies g_j then work together to reconstruct the original input as x′ = G_v(y′) ∈ X, and the reconstruction error Δ_x(x, x′) is measured. Symmetrically, the reconstruction error between y and its round-trip reconstruction y′ = F_q(G_v(y)) is also taken into account. The resulting dual learning loss is defined by Equation (22).
ℓ_dual(M_x, M_y, F_q, G_v) = (1/|M_x|) Σ_{x∈M_x} Δ_x(x, G_v(F_q(x))) + (1/|M_y|) Σ_{y∈M_y} Δ_y(y, F_q(G_v(y)))    (22)
|M_x| represents the number of samples in the dataset D_x, while |M_y| represents the number of samples in the dataset D_y. F_q denotes the set of multiple primal models, and G_v represents the set of multiple dual models. The error Δ_x(x, x′) measures the discrepancy between the original input x and the reconstructed input x′ obtained through the multiple models; it serves as the feedback error used to update and optimize the models. Similarly, Δ_y(y, y′) represents the feedback error between the original output y and the reconstructed output y′.
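A compact sketch of Equation (22) is given below; `F_q` and `G_v` are the combined ensembles (e.g., built with the weighted combination above), and the concrete feedback errors `delta_x` and `delta_y` (for example, a reconstruction log-likelihood for sentences and a cross-entropy or 0/1 error for labels) are assumptions of this sketch.

```python
def bi_dual_loss(xs, ys, F_q, G_v, delta_x, delta_y):
    """Equation (22): average round-trip reconstruction error in both directions."""
    loss_x = sum(delta_x(x, G_v(F_q(x))) for x in xs) / len(xs)   # (1/|M_x|) Σ Δ_x(x, G_v(F_q(x)))
    loss_y = sum(delta_y(y, F_q(G_v(y))) for y in ys) / len(ys)   # (1/|M_y|) Σ Δ_y(y, F_q(G_v(y)))
    return loss_x + loss_y
```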
To enhance the model’s performance, we incorporate N model pairs (f, g) for joint training; when N = 1, the model reverts to a standard dual inference learning model. In summary, we implement the model using the Bi-dual inference algorithm (Algorithm 1). The algorithm begins by computing multiple f and g using Equations (10) and (11). During this computation, the Lagrange parameters λ_xy and λ_yx, as well as the trainable parameters θ_xy and θ_yx, are adjusted. After the for-loop completes, the resulting multiple f and g are used in Equations (20) and (21), yielding the set F_q of multiple primal models and the set G_v of multiple dual models. Finally, all parameters are fed into Equation (22) to calculate the loss function of the model.
Algorithm 1 Bi-dual inference algorithm
  • Input:
    • 1: Data D_x and D_y; optimizers LM_A and LM_B; hyper-parameters α and β; beam search size K; Lagrange parameters λ_xy and λ_yx
    • 2: D_x: a set of short texts with emotion labels
    • 3: D_y: a set of emotion labels
  • Output:
    • 4: Loss function of the model ℓ_dual(M_x, M_y, F_q, G_v)
    • 5: repeat
    • 6:     Train f_0 and g_0
    • 7:     Accumulate the iteration number: t = t + 1
    • 8:     Get a random minibatch of m pairs {(x_j, y_j)}_{j=1..m}
    • 9:     Calculate the model gradients G_f and G_g according to Equations (16) and (17)
    • 10:     Update the parameters of f_0 and g_0 to obtain f_0′ and g_0′
    • 11:     θ_xy ← LM_A(θ_xy, G_f), θ_yx ← LM_B(θ_yx, G_g)
    • 12:     Train f_i and g_j
    • 13:     for i = 0; i < N − 1; i++ do
    • 14:         Initialize ℓ_f and ℓ_g according to Equations (18) and (19), then calculate f_i and g_j according to Equations (10) and (11)
    • 15:         f_i(x) = argmin_{y∈Y} [α ℓ_f(x, y) + (1 − α) ℓ_g(x, y)]
    • 16:         g_j(y) = argmin_{x∈X} [β ℓ_f(x, y) + (1 − β) ℓ_g(x, y)]
    • 17:         Following Equations (20) and (21), calculate F_q and G_v
    • 18:         Update the parameters and return the value of the model
    • 19:         ℓ_dual ← ℓ_dual(M_x, M_y, F_q, G_v)
    • 20:     end for
    • 21: until convergence

4. Experiment

In this section, we will perform experimental comparisons and evaluations of the model to showcase the practicality and effectiveness of our proposed methods.

4.1. Datasets Description

The datasets utilized in our experiments are sourced from the 2013 and 2014 Natural Language Processing and Chinese Computing Conference (NLPCC). We refer to them as NLPCC2013 and NLPCC2014, respectively. These datasets are particularly suitable for emotion classification in natural language processing techniques. They encompass six emotion categories, namely happy, disgusted, sad, angry, surprised, and fearful. The NLPCC2013 dataset comprises 13,250 instances, including 4939 text data samples with emotions and 2174 long text data samples with emotions. On the other hand, the NLPCC2014 dataset consists of 48,875 instances, with 17,705 text data samples containing emotions and 5417 long text data samples containing emotions. For our study, 80% of these data are allocated as the training set, while the remaining 20% serve as the validation set. Further details regarding these two datasets can be found in Table 1.

4.2. Evaluation Metrics

To assess the effectiveness of our proposed model, we evaluate its performance using the following metrics: precision (P), recall (R), and F1-score.
  • TP (true positive) refers to the count of correctly predicted positive sentiment labels from positive sentences.
  • TN (true negative) refers to the count of correctly predicted negative sentiment labels from negative sentences.
  • FP (false positive) refers to the count of negative sentences that are incorrectly predicted as positive.
  • FN (false negative) refers to the count of positive sentences that are incorrectly predicted as negative.
The precision (P), recall (R), F1-score and macro F1-score are thus defined as shown in Equations (23)–(28).
Precision = TP / (TP + FP)    (23)
Recall = TP / (TP + FN)    (24)
F1-score = 2 × Precision × Recall / (Precision + Recall)    (25)
macro-F1 scores:
macro-P = (1/n) (p_1 + p_2 + … + p_n)    (26)
macro-R = (1/n) (r_1 + r_2 + … + r_n)    (27)
macro-F1 = 2 × macro-P × macro-R / (macro-P + macro-R)    (28)
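For concreteness, a short Python sketch of Equations (23)–(28) follows; it assumes the per-class counts of true positives, false positives, and false negatives have already been tallied for each of the six emotion labels.

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class precision, recall, and F1, as in Equations (23)-(25)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def macro_f1(per_class_counts):
    """Macro-averaged scores of Equations (26)-(28).

    per_class_counts is a list of (tp, fp, fn) tuples, one per emotion label.
    """
    ps, rs = zip(*[precision_recall_f1(tp, fp, fn)[:2] for tp, fp, fn in per_class_counts])
    macro_p, macro_r = sum(ps) / len(ps), sum(rs) / len(rs)
    return 2 * macro_p * macro_r / (macro_p + macro_r) if macro_p + macro_r else 0.0
```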

4.3. Baseline Models

In this section, we discuss the baseline models and the various parameter settings of the model.
  • SVM (Support Vector Machine) [7] is a simple machine-learning-based method used for recognizing emotions in short texts. It categorizes the emotions into five types but does not consider the symbolic information present in the text.
  • LSTM (Long Short-Term Memory Network) [42] is a widely used variant of recurrent neural networks (RNN). It addresses the issue of gradient disappearance in traditional RNNs by introducing multiple gating units. The LSTM’s hidden state is used as the feature vector for sentiment classification of documents or sentences.
  • Bi-LSTM (Bidirectional Long Short-Term Memory) [25] combines Bi-LSTM with a multi-type attention mechanism. It primarily solves the problem of implicit emotion classification. However, experiments have shown that this method also yields effective results for explicit emotion classification, making it suitable for various emotion classification tasks.
  • CNN+BiLSTM (Convolutional Neural Network + Bidirectional Long Short-Term Memory) [43] is a hybrid approach that combines CNN and BiLSTM to extract sentiment features from Weibo short texts, aiming to achieve sentiment classification.
  • C-GRU (Context-aware Gated Recurrent Units) [44] extracts contextual information (topics) from tweets and incorporates it into an additional layer that determines the sentiment expressed in the tweets. This model learns a direct association between the sentiment layer and task prediction, enabling multi-label sentiment classification.
  • MF-CSEL (Multilingual Text Sentiment Analysis Model) [45] is based on training the CBOW (Continuous Bag-of-Words) model. It combines multiple languages and utilizes cost-sensitive integrated learning methods to perform multilingual sentiment analysis.

4.4. Experimental Results and Analysis

This paper primarily examines the performance of the proposed model on six-element sentiment classification. Therefore, starting from this subsection, we conduct a macro-level comparison of the proposed model’s sentiment classification performance against the baseline models mentioned above, followed by a micro-level comparison of the precision (P), recall (R), and F1-score (F) values between the proposed model and similar models. Furthermore, we conduct a series of detailed experiments on the proposed model itself, including exploring different model parameter settings, varying the sentence length used for training, and adjusting the data scale.

4.4.1. Macro Comparison Analysis of the Model with the Baseline Model

The performance of both our proposed model and the baseline models is reported in Table 2 and Table 3. The results indicate that our proposed model improves the precision (P) and F1 scores over the baseline models to a certain extent. On the two datasets, our proposed model achieves the highest F1 scores of 74.64% and 82.66%, respectively, an improvement of more than 5% over the other baseline models. Although the recall (R) is slightly lower on the NLPCC2013 dataset, due to the mutual learning between the largest volume of training data and the unemotional text pairs, it increases significantly as the amount of training data for each sentiment label increases, rising from 67.37% to 80.22%.
Based on the two tables, we can draw the following conclusions. Firstly, as the size of the training data increases, the overall performance on the NLPCC2014 dataset is better than that on the NLPCC2013 dataset. This can be attributed to the fact that more training data leads to the generation of more proxy pairs (primal–dual model pairs), resulting in more feedback signals that enhance the mutual learning of the primal and dual models and ultimately improve the results. Furthermore, the scores of the other baseline models indicate that deep-learning-based models (LSTM, Bi-LSTM, CNN-BiLSTM, C-GRU) generally outperform machine-learning-based models (SVM). However, MF-CSEL, which is based on an integrated learning model and designed for sentiment classification in multilingual texts, does not perform well on the monolingual datasets. Our proposed model shares some similarities with the integrated model, and the specific differences between them are further examined in subsequent experiments. Figure 6 provides a visual representation of the variations between the models.

4.4.2. Micro Comparison Analysis of Model and Baseline Model

In this subsection, we provide a comprehensive comparison of the Bi-Dual Inference model with six other baseline models (SVM, LSTM, Bi-LSTM, CNN-BiLSTM, C-GRU, MF-CSEL) on two distinct datasets for six-tuple emotion classification (happy, sad, angry, surprised, fear, disgust). The results of this comparison are presented in Table 4 and Table 5.
Based on the information provided in the two tables, it is evident that our proposed Bi-Dual Inference model achieves the highest precision (P), recall (R), and  F 1  scores in multiple emotion label categories. This clearly demonstrates the effectiveness of our model in comparison to the other models.
Specifically, when comparing the NLPCC 2013 and NLPCC 2014 datasets, all models show improvement to some extent due to the increase in training data size. However, our proposed model exhibits the most significant improvement as a result of the larger training data size. This is attributed to the increased feedback signals between the dual model and primal model, leading to enhanced performance. This improvement is particularly evident in the ’fear’ emotion label, where the  F 1  score reaches its highest performance in our model, increasing from 55.51% to 61.82%. Additionally, there is a notable difference in performance for certain emotion labels, specifically ’happy’ and ’fear’. This discrepancy may be attributed to the fact that people tend to share more content that brings them joy on social networks, while fearful emotions are often not shared as frequently. Consequently, this results in a larger volume of certain emotions being present in the social network data.
To provide a visual representation of the performance comparison among the models, we extracted the precision (P), recall (R), and  F 1  scores for each of the six emotions presented in Table 4 and Table 5. These scores are depicted in Figure 7 and Figure 8.
By analyzing these two figures, it becomes evident that our proposed model outperforms the others in terms of precision (P), recall (R), and F1 scores for most sentiment labels, and its scores follow a more consistent trend without significant fluctuations. Among the other models, the SVM model exhibits noticeably lower precision (P) and F1 scores for the fear emotion label. Overall, the other models perform comparatively worse across the six emotion categories, correctly identifying a smaller proportion of emotion labels among those they predict.

4.4.3. Analysis of the Number of Iterations

In this section, we examine the sensitivity of the parameters in our proposed model. The parameters chosen for our model are based on the highest  F 1  score. Firstly, we present the accuracy (p) of our proposed model as a function of the number of training iterations in Figure 9a. From Figure 9a, it is evident that both the original model and the dual model exhibit significant fluctuations in the early stages. One possible explanation for this is that the interaction between the two models is not fully established at the beginning of the training process. However, as the training iterations progress, the two models enhance each other’s performance through signal feedback, leading to a gradual stabilization.

4.4.4.  λ  Parameter Analysis

We examine the relationship between the Lagrange hyperparameter λ in Equation (16) and the accuracy of our proposed model. The detailed experimental results are presented in Figure 9b. Since the structure is dual in nature, we set λ_xy = λ_yx = λ for simplicity, and plot the trend of the accuracy with respect to λ for our proposed model. From the graph, it is evident that as λ increases gradually, the model becomes more effective. The primal model performs optimally within the dual structure when λ = 0.025.

4.4.5.  α  Parameter Analysis

In this subsection, we present an experimental discussion and analysis of the parameter  α  in Equation (10). The detailed experimental results are illustrated in Figure 10.
Figure 10a displays the comparison of the F1 scores between our proposed model and the other baseline models for the interval α ∈ [0, 0.1], and Figure 10b presents the comparison for the interval α ∈ [0, 1]. It is evident that our proposed model achieves the best performance when α = 0.02, whereas the other models are not significantly affected by the parameter α, as their F1 scores remain relatively stable. From Figure 10b, we can conclude that our model outperforms the other baseline models across a wide range of α values.

4.4.6. Input Sentence Length Analysis

To assess the sensitivity of various models to the length of input sentences, we deliberately partitioned the training set into multiple sub-training sets with varying lengths. The detailed experimental results are depicted in Figure 11a.
The results presented in Figure 11a clearly demonstrate that the proposed Bi-Dual Inference model outperforms other baseline models in terms of overall performance. Notably, the performance of the Bi-Dual Inference model is optimal when the sentence length is 20. On the other hand, the LSTM model also exhibits a noticeable trend. As the length of the training sentence increases, the model tends to forget more information, resulting in diminished performance.

4.4.7. Different Batch Size K Analysis

If the batch size is denoted as K, we select K samples at each iteration and compute the corresponding parameter adjustment values; the average of these values is then used as the final adjustment to update the parameters of the network. Increasing the batch size K can lead to faster processing in our proposed model, as well as reduced training fluctuations, thereby facilitating convergence. By ensuring a consistent sample size at each iteration, stable adjustment values are obtained that benefit all samples. Conversely, if the batch size K is small, such as K = 1, the resulting adjustment value may be essentially random: while it may be the optimal adjustment for a specific sample, it may not adapt well to other samples. Consequently, the training process will fluctuate continuously, making it difficult for our proposed model to converge. Typically, this parameter is set between 1 and 40 in conventional models. For detailed experimental results, please refer to Figure 11b.
The analysis of Figure 11b reveals that when the batch size K is too small, our proposed model is ineffective because there are insufficient data for learning at each update. As the batch size increases, our proposed model achieves optimal performance at K = 20, whereas for K < 5 the training fails to converge due to the small batch size. In comparison with other models, the LSTM model exhibits a gradual decrease in the F1 score as the batch size increases, while the Bi-LSTM model performs better as it considers more information. The overall performance of the CNN+BiLSTM model is comparable to our proposed model. Nevertheless, in general, our proposed model outperforms the other models.

4.4.8. Analysis of Different Hidden Layer Sizes

In the original model, a recurrent neural network (RNN) is utilized. Apart from the input and output layers, all other layers, including the convolutional layer, activation layer, pooling layer, and fully connected layer, are considered as hidden layers. Increasing the number of hidden layers in a neural network allows for the extraction of more features, leading to improved learning outcomes. However, it is important to note that the performance of the neural network does not necessarily improve with an increasing number of hidden layers. This is because it can lead to the problem of gradient explosion, and there is a limit to the number of layers a neural network can have. As the number of hidden layers increases, the network will have more parameters, resulting in longer training times. Therefore, setting an appropriate number of hidden layers significantly impacts the performance of the proposed model. The experimental results are depicted in Figure 11c.
The figure clearly demonstrates that our proposed model achieves optimal performance when the hidden layer size is set to 250. Among the other baseline models, such as CNN + BiLSTM, MF-CSEL, and C-GRU, consistent and stable performance is observed across different hidden layer sizes. However, when the hidden layer size is increased to 400, the LSTM model exhibits poor performance, and is outperformed by the Bi-LSTM model.

5. Conclusions and Future Work

Research on social network emotions holds significant importance for personal development, business marketing, administration, politics, and more. Emotion analysis in social networks plays a crucial role in effectively managing the development and evolution of public opinion. However, existing research mostly categorizes users’ emotions into two or three broad categories, such as positive, negative, and neutral. There is a scarcity of detailed classification research on emotions in Chinese short texts. Analyzing user emotions from generated text information has become an urgent problem. In light of this, we combine the current methodology of dual learning and focus on analyzing the six-element emotions in Chinese text information. Our work involves the following tasks:
  • In order to enhance the recognition and classification of emotions and address the issue of poor classification performance due to limited training data for emotional subcategories, we employ the concept of dual learning. This approach involves mutual training and learning between the original model and the dual model using a smaller amount of labeled data. In our implementation, we utilize the recurrent neural network LSTM within the framework of deep learning technology to construct both the original model and the dual model. We also provide a detailed explanation of the parameter composition and its mathematical expression.
  • Our findings indicate that agent models with the same orientation exhibit similar functionalities with certain distinctions. Consequently, we generate multiple primal–dual model pairs by combining the aforementioned approaches with different initialization seeds and data access sequences, and propose the Bi-Dual Inference sentiment classification framework. The resulting multiple distinct dual models and primal models, being similar, can be used to reason about a single task in the same direction, thereby achieving improved sentiment classification outcomes.
  • In order to validate the effectiveness of our proposed model in the field of emotion classification, we conducted a comprehensive set of comparative experiments specifically targeting the six-element emotion classification. Firstly, we conducted a macro-level analysis to compare the classification performance of our proposed model against other models such as LSTM, Bi-LSTM, CNN+BiLSTM, C-GRU, and MF-CSEL. Subsequently, we performed a microscopic analysis focusing on the six-element sentiment classification of our proposed model. Additionally, we separately analyzed the hyperparameters involved in the experiments. The results of the comparisons effectively demonstrate the efficacy of our proposed model. The experiments also reveal that our proposed model exhibits notable improvements in sentence generation tasks, thereby highlighting its feasibility and potential.
However, our proposed model does bear certain limitations. To address this, we aim to enhance and optimize our model by collecting a wider range of sentiment data using web crawlers and other methods. This will enable us to extend the applicability of our model to other language datasets. Additionally, we believe that integrating our proposed models with key node characteristics in social networks for sentiment analysis research can provide valuable insights into opinion trends in specific domains.

Author Contributions

Conceptualization & writing, X.H.; Review & editing, Y.D.; Writing—original draft & experimental verification, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61872298), and the Science and Technology Department of Sichuan Province (No. 2021YFQ0008, No. 2023YFQ0044).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interest or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Baccianella, S.; Esuli, A.; Sebastiani, F. SENTIWORDNET 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC’10), Valletta, Malta, 17–23 May 2010; pp. 2200–2204.
  2. Desmet, B.; Hoste, V. Emotion detection in suicide notes. Expert Syst. Appl. 2013, 40, 6351–6358.
  3. Cai, Y.; Yang, K.; Huang, D.; Zhou, Z.; Lei, X.; Xie, H.; Wong, T.L. A hybrid model for opinion mining based on domain sentiment dictionary. Int. J. Mach. Learn. Cybern. 2019, 10, 2131–2142.
  4. Bandhakavi, A.; Wiratunga, N.; Padmanabhan, D.; Massie, S. Lexicon based feature extraction for emotion text classification. Pattern Recognit. Lett. 2017, 93, 133–142.
  5. Taboada, M.; Brooke, J.; Tofiloski, M.; Voll, K.D.; Stede, M. Lexicon-Based Methods for Sentiment Analysis. Comput. Linguist. 2011, 37, 267–307.
  6. Wu, Y.; Kang, X.; Matsumoto, K.; Yoshida, M.; Xielifuguli, K.; Kita, K. Sentence Emotion Classification for Intelligent Robotics Based on Word Lexicon and Emoticon Emotions. In Proceedings of the IEEE International Conference of Intelligent Robotic and Control Engineering (IRCE), Lanzhou, China, 24–27 August 2018.
  7. Liu, Y.; Bi, J.W.; Fan, Z.P. A method for multi-class sentiment classification based on an improved one-vs-one (OVO) strategy and the support vector machine (SVM) algorithm. Inf. Sci. 2017, 394, 38–52.
  8. Catal, C.; Nangir, M. A sentiment classification model based on multiple classifiers. Appl. Soft Comput. 2017, 50, 135–141.
  9. Eko Soelistio, Y.; Martinus, R.S.S. Simple Text Mining for Sentiment Analysis of Political Figure Using Naive Bayes Classifier Method. arXiv 2015, arXiv:1508.05163.
  10. Ouyang, X.; Zhou, P.; Li, C.H.; Liu, L. Sentiment Analysis Using Convolutional Neural Network. In Proceedings of the IEEE International Conference on Computer and Information Technology, Dhaka, Bangladesh, 21–23 December 2015.
  11. Santos, C.N.D.; Gattit, M. Deep Convolutional Neural Networks for Sentiment Analysis of Short Texts. In Proceedings of the International Conference on Computational Linguistics, Dublin, Ireland, 23–29 August 2014.
  12. Wawre, S.V.; Deshmukh, S.N. Sentiment Classification using Machine Learning Techniques. Int. J. Sci. Res. 2016, 5, 819–821.
  13. Dhiman, A.; Kumar, D. Sentiment Analysis Approach based N-gram and KNN Classifier. Int. J. Comput. Appl. 2018, 182, 29–32.
  14. Pang, B. Thumbs up? Sentiment Classification Using Machine Learning Techniques. In Proceedings of the EMNLP, Philadelphia, PA, USA, 6–7 July 2002.
  15. Rezwanul, M.; Ali, A.; Rahman, A. Sentiment Analysis on Twitter Data using KNN and SVM. Int. J. Adv. Comput. Sci. Appl. 2017, 8, 080603.
  16. Xue, J.; Chen, J.; Hu, R.; Chen, C.; Zheng, C.D.; Zhu, T. Twitter discussions and concerns about COVID-19 pandemic: Twitter data analysis using a machine learning approach. J. Med. Internet Res. 2020, 22, e20550.
  17. Liu, L.; Feng, S.; Wang, D.; Zhang, Y. An Empirical Study on Chinese Microblog Stance Detection Using Supervised and Semi-supervised Machine Learning Methods. In Proceedings of the International Conference on Computer Processing of Oriental Languages; CCF Conference on Natural Language Processing and Chinese Computing, Kunming, China, 2–6 December 2016.
  18. Jiang, F.; Liu, Y.Q.; Luan, H.B.; Sun, J.S.; Zhu, X.; Zhang, M.; Ma, S.P. Microblog Sentiment Analysis with Emoticon Space Model. J. Comput. Sci. Technol. 2015, 30, 1120–1129.
  19. Gopalakrishnan, K.; Salem, F.M. Sentiment Analysis Using Simplified Long Short-term Memory Recurrent Neural Networks. arXiv 2020, arXiv:2005.03993.
  20. Ahmed, M.; Chen, Q.; Li, Z. Constructing domain-dependent sentiment dictionary for sentiment analysis. Neural Comput. Appl. 2020, 32, 14719–14732.
  21. Ren, H.; Wang, W.; Qu, X.; Cai, Y. A new hybrid-parameter recurrent neural network for online handwritten chinese character recognition. Pattern Recognit. Lett. 2019, 128, 400–406.
  22. Himeno, S.; Aono, M. KDE-AFFECT at SemEval-2018 Task 1: Estimation of Affects in Tweet by Using Convolutional Neural Network for n-gram. In Proceedings of the 12th International Workshop on Semantic Evaluation 2018, New Orleans, LA, USA, 5–6 June 2018; pp. 156–161. [Google Scholar]
  23. Can, E.F.; Ezen-Can, A.; Can, F. Multilingual Sentiment Analysis: An RNN-Based Framework for Limited Data. arXiv 2018, arXiv:1806.04511. [Google Scholar]
  24. Xing, Y.; Xiao, C.; Wu, Y.; Ding, Z. A Convolutional Neural Network for Aspect-Level Sentiment Classification. Int. J. Pattern Recognit. Artif. Intell. 2019, 33, 1–13. [Google Scholar] [CrossRef]
  25. Wei, J.; Liao, J.; Yang, Z.; Wang, S.; Zhao, Q. BiLSTM with Multi-Polarity Orthogonal Attention for Implicit Sentiment Analysis. Neurocomputing 2019, 383, 165–173. [Google Scholar] [CrossRef]
  26. Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781. [Google Scholar]
  27. Li, X.; Feng, S.; Wang, D.; Zhang, Y. Context-aware emotion cause analysis with multi-attention-based neural network. Knowl. Based Syst. 2019, 174, 205–218. [Google Scholar] [CrossRef]
  28. de Bruyne, L.; de Clercq, O.; Hoste, V. LT3 at SemEval-2018 Task 1: A classifier chain to detect emotions in tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, New Orleans, LA, USA, 5–6 June 2018; pp. 123–127. [Google Scholar]
  29. Turcu, R.A.; Amarandei, S.M.; Flescan-Lovin-Arseni, I.A.; Gifu, D.; Trandabat, D. EmoIntens Tracker at SemEval-2018 Task 1: Emotional Intensity Levels in Tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation 2018, New Orleans, LA, USA, 5–6 June 2018; pp. 177–180. [Google Scholar]
  30. Rajabi, Z.; Shehu, A.; Uzuner, O. A Multi-channel BiLSTM-CNN Model for Multilabel Emotion Classification of Informal Text. In Proceedings of the 14th IEEE International Conference on Semantic Computing, San Diego, CA, USA, 3–5 February 2020; pp. 303–306. [Google Scholar]
  31. Lu, K.; Wu, J. Sentiment analysis of film review texts based on sentiment dictionary and SVM. In Proceedings of the 3rd International Conference on Innovation in Artificial Intelligence, Suzhou, China, 15–18 March 2019; Volume Part F148152, pp. 73–77. [Google Scholar]
  32. Chen, L.C.; Lee, C.M.; Chen, M.Y. Exploration of social media for sentiment analysis using deep learning. Soft Comput. 2020, 24, 8187–8197. [Google Scholar] [CrossRef]
  33. Keshtkar, F.; Inkpen, D. A hierarchical approach to mood classification in blogs. Nat. Lang. Eng. 2012, 18, 61–81. [Google Scholar] [CrossRef]
  34. Mukwazvure, A.; Supreethi, K. A hybrid approach to sentiment analysis of news comments. In Proceedings of the 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), Noida, India, 2–4 September 2015; pp. 1–6. [Google Scholar]
  35. Esmin, A.A.; De Oliveira, R.L., Jr.; Matwin, S. Hierarchical classification approach to emotion recognition in twitter. In Proceedings of the 11th International Conference on Machine Learning and Applications 2012, Boca Raton, FL, USA, 12–15 December 2012; Volume 2, pp. 381–385. [Google Scholar]
  36. Xia, Y.; Qin, T.; Chen, W.; Bian, J.; Yu, N.; Liu, T.Y. Dual Supervised Learning. In Proceedings of the International Conference on Machine Learning 2017, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 3789–3798. [Google Scholar]
  37. Xia, Y.; Bian, J.; Qin, T.; Yu, N.; Liu, T.Y. Dual Inference for Machine Learning. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 3112–3118. [Google Scholar]
  38. Xia, Y.; Tan, X.; Tian, F.; Qin, T.; Yu, N.; Liu, T.Y. Model-Level Dual Learning. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Volume 80, pp. 5379–5388. [Google Scholar]
  39. Dai, A.M.; Le, Q.V. Semi-Supervised Sequence Learning. arXiv 2015, arXiv:1511.01432. [Google Scholar]
  40. Wang, T.; Cho, K. Larger-context language modelling with recurrent neural network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; Volume 3, pp. 1319–1329. [Google Scholar]
  41. He, D.; Xia, Y.; Qin, T.; Wang, L.; Yu, N.; Liu, T.Y.; Ma, W.Y. Dual learning for machine translation. In Proceedings of the Advances in Neural Information Processing Systems 2016, Barcelona, Spain, 5–10 December 2016; pp. 820–828. [Google Scholar]
  42. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  43. Ling, M.; Chen, Q.; Sun, Q.; Jia, Y. Hybrid Neural Network for Sina Weibo Sentiment Analysis. IEEE Trans. Comput. Soc. Syst. 2020, 7, 983–990. [Google Scholar] [CrossRef]
  44. Li, L.; Yang, M. A Context-Aware Gated Recurrent Units with Self-Attention for Emotion Recognition. J. Phys. Conf. Ser. 2021, 1880, 012026. [Google Scholar] [CrossRef]
  45. Xu, Y.; Chai, Y.; Wang, L.; Yuan, F. Multilingual Text Emotional Analysis Model MF-CSEL. J. Chin. Comput. Syst. 2019, 40, 1026–1033. [Google Scholar]
Figure 1. The sequence autoencoder for the sequence “WXYZ”.
Figure 2. Sentence generation model.
Figure 3. Bi-Dual Inference model.
Figure 4. Partial primal task frame diagram.
Figure 5. Partial dual task frame diagram.
Figure 6. Overall performance of different models on NLPCC2013 and NLPCC2014.
Figure 7. Sentiment classification comparison chart of different models on NLPCC2013 dataset.
Figure 8. Sentiment classification comparison chart of different models on NLPCC2014 dataset.
Figure 9. Model parameter analysis. (a) The accuracy of the model varies with the number of iterations; (b) λ impact on model performance.
Figure 10. α impact on model performance.
Figure 11. Parameter sensitivity analysis.
Table 1. NLPCC2013 and NLPCC2014 datasets.

NLPCC2013 dataset
Emotion category | Happy | Angry | Sad | Fear | Surprised | Disgusted | None
Sample size | 1562 | 575 | 670 | 92 | 248 | 800 | 6648
Test sample | 391 | 144 | 168 | 23 | 63 | 204 | 1663

NLPCC2014 dataset
Emotion category | Happy | Angry | Sad | Fear | Surprised | Disgusted | None
Sample size | 6511 | 1712 | 2230 | 265 | 721 | 2746 | 24,936
Test sample | 1627 | 428 | 557 | 66 | 180 | 686 | 6234
Table 2. Overall performance of different models on NLPCC2013 data.

Models | Macro P% | Macro R% | Macro F1%
SVM | 52.41 | 47.12 | 49.63
LSTM | 56.88 | 58.02 | 57.54
Bi-LSTM | 56.39 | 62.15 | 60.01
CNN-BiLSTM | 78.15 | 65.22 | 71.10
C-GRU | 60.03 | 79.96 | 69.06
MF-CSEL | 53.05 | 60.11 | 56.35
Bi-Dual Inference | 83.67 | 67.37 | 74.64
Table 3. Overall performance of different models on NLPCC2014 data.

Models | Macro P% | Macro R% | Macro F1%
SVM | 55.14 | 47.16 | 50.59
LSTM | 59.02 | 58.47 | 58.74
Bi-LSTM | 56.57 | 72.44 | 63.53
CNN-BiLSTM | 81.87 | 66.79 | 73.57
C-GRU | 63.95 | 77.43 | 70.05
MF-CSEL | 59.39 | 62.55 | 60.92
Bi-Dual Inference | 85.26 | 80.22 | 82.66
Table 4. Sentiment classification performance of different models on NLPCC2013 dataset.

Emotion | Model | P% | R% | F1%
Happy | SVM | 55.22 | 65.31 | 59.84
Happy | LSTM | 57.44 | 26.76 | 36.51
Happy | Bi-LSTM | 67.02 | 60.30 | 63.48
Happy | CNN-BiLSTM | 80.12 | 65.55 | 72.10
Happy | C-GRU | 55.71 | 80.03 | 65.69
Happy | MF-CSEL | 68.00 | 69.52 | 68.70
Happy | Bi-Dual Inference | 85.53 | 68.88 | 76.30
Disgusted | SVM | 50.15 | 52.31 | 51.20
Disgusted | LSTM | 52.46 | 43.36 | 47.54
Disgusted | Bi-LSTM | 55.77 | 48.22 | 51.72
Disgusted | CNN-BiLSTM | 60.22 | 65.88 | 62.92
Disgusted | C-GRU | 62.15 | 55.77 | 58.78
Disgusted | MF-CSEL | 65.55 | 70.20 | 67.79
Disgusted | Bi-Dual Inference | 70.02 | 73.55 | 71.74
Surprised | SVM | 16.22 | 36.24 | 22.40
Surprised | LSTM | 52.41 | 47.12 | 49.63
Surprised | Bi-LSTM | 45.51 | 35.62 | 39.94
Surprised | CNN-BiLSTM | 55.77 | 53.27 | 54.49
Surprised | C-GRU | 50.22 | 38.77 | 43.75
Surprised | MF-CSEL | 37.21 | 32.66 | 34.70
Surprised | Bi-Dual Inference | 60.23 | 65.55 | 62.77
Fear | SVM | 20.33 | 66.77 | 31.16
Fear | LSTM | 27.12 | 36.77 | 31.24
Fear | Bi-LSTM | 54.89 | 50.77 | 52.99
Fear | CNN-BiLSTM | 60.11 | 56.22 | 58.09
Fear | C-GRU | 45.21 | 40.66 | 42.81
Fear | MF-CSEL | 16.50 | 32.72 | 21.93
Fear | Bi-Dual Inference | 50.11 | 62.22 | 55.51
Angry | SVM | 25.32 | 55.70 | 34.81
Angry | LSTM | 51.43 | 51.97 | 33.97
Angry | Bi-LSTM | 50.12 | 40.52 | 44.79
Angry | CNN-BiLSTM | 66.73 | 68.92 | 67.80
Angry | C-GRU | 55.35 | 60.02 | 57.59
Angry | MF-CSEL | 47.71 | 33.15 | 39.05
Angry | Bi-Dual Inference | 65.20 | 70.22 | 67.61
Sad | SVM | 26.95 | 60.34 | 37.25
Sad | LSTM | 57.51 | 56.58 | 56.88
Sad | Bi-LSTM | 52.20 | 50.31 | 51.23
Sad | CNN-BiLSTM | 60.80 | 54.48 | 57.46
Sad | C-GRU | 59.99 | 70.22 | 64.70
Sad | MF-CSEL | 69.02 | 33.41 | 45.03
Sad | Bi-Dual Inference | 70.25 | 75.58 | 72.81
Table 5. Sentiment classification performance of different models on NLPCC2014 dataset.

Emotion | Model | P% | R% | F1%
Happy | SVM | 61.00 | 69.10 | 64.80
Happy | LSTM | 54.88 | 58.36 | 56.49
Happy | Bi-LSTM | 68.22 | 62.15 | 65.04
Happy | CNN-BiLSTM | 82.21 | 70.22 | 75.74
Happy | C-GRU | 56.22 | 78.05 | 65.36
Happy | MF-CSEL | 65.90 | 79.60 | 72.61
Happy | Bi-Dual Inference | 86.68 | 75.22 | 80.54
Disgusted | SVM | 56.39 | 62.15 | 60.01
Disgusted | LSTM | 56.57 | 72.44 | 63.53
Disgusted | Bi-LSTM | 52.31 | 55.33 | 56.86
Disgusted | CNN-BiLSTM | 64.55 | 70.02 | 67.17
Disgusted | C-GRU | 58.33 | 50.75 | 54.27
Disgusted | MF-CSEL | 66.77 | 70.08 | 68.38
Disgusted | Bi-Dual Inference | 72.25 | 75.38 | 73.78
Surprised | SVM | 14.20 | 39.12 | 20.83
Surprised | LSTM | 52.63 | 54.24 | 53.22
Surprised | Bi-LSTM | 48.88 | 40.22 | 44.12
Surprised | CNN-BiLSTM | 60.58 | 65.99 | 63.16
Surprised | C-GRU | 49.33 | 45.77 | 47.48
Surprised | MF-CSEL | 49.21 | 32.66 | 39.22
Surprised | Bi-Dual Inference | 63.55 | 68.22 | 65.80
Fear | SVM | 25.66 | 78.81 | 38.62
Fear | LSTM | 53.50 | 52.95 | 53.22
Fear | Bi-LSTM | 59.50 | 51.88 | 55.42
Fear | CNN-BiLSTM | 62.34 | 54.37 | 58.08
Fear | C-GRU | 50.22 | 45.10 | 47.52
Fear | MF-CSEL | 17.25 | 32.74 | 22.55
Fear | Bi-Dual Inference | 55.22 | 70.22 | 61.82
Angry | SVM | 29.92 | 66.70 | 41.31
Angry | LSTM | 51.05 | 47.13 | 50.65
Angry | Bi-LSTM | 55.21 | 50.31 | 52.64
Angry | CNN-BiLSTM | 70.22 | 72.11 | 71.15
Angry | C-GRU | 60.09 | 58.22 | 59.14
Angry | MF-CSEL | 54.40 | 45.21 | 49.33
Angry | Bi-Dual Inference | 70.31 | 74.22 | 72.21
Sad | SVM | 30.71 | 65.22 | 41.75
Sad | LSTM | 53.44 | 42.92 | 47.61
Sad | Bi-LSTM | 53.10 | 49.33 | 51.14
Sad | CNN-BiLSTM | 65.66 | 60.77 | 63.12
Sad | C-GRU | 65.11 | 69.27 | 67.12
Sad | MF-CSEL | 49.00 | 53.30 | 51.22
Sad | Bi-Dual Inference | 71.05 | 76.63 | 73.73
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
