Article

Cooperation in Social Dilemmas: A Group Game Model with Double-Layer Networks

College of Computer Science and Technology, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Future Internet 2021, 13(2), 33; https://doi.org/10.3390/fi13020033
Submission received: 27 December 2020 / Revised: 19 January 2021 / Accepted: 24 January 2021 / Published: 27 January 2021

Abstract

The combination of complex networks and game theory is one of the most suitable ways to describe the evolutionary laws of various complex systems. In order to explore the evolution of group cooperation in multiple social dilemmas, a model of a group game with a double-layer network is proposed here. Firstly, to simulate a multiplayer game under multiple identities, we combine a double-layer network and public goods game. Secondly, in order to make an individual’s strategy selection process more in line with a practical context, a new strategy learning method that incorporates individual attributes is designed here, referred to as a “public goods game with selection preferences” (PGG-SP), which makes strategic choices more human-like and diversified. Finally, a co-evolution mechanism for strategies and topologies is introduced based on the double-layer network, which effectively explains the dynamic game process in real life. To verify the role of multiple double-layer networks with a PGG-SP, four types of double-layer networks are applied in this paper. In addition, the corresponding game results are compared between single-layer, double-layer, static, and dynamic networks. Accordingly, the results show that double-layer networks can facilitate cooperation in group games.

1. Introduction

Complex networks can productively depict different systems in society and nature [1], in which the edges of the network represent the relationships between systems. Moreover, interactions are often multidimensional; for example, people participate in multiple social networks at once [2], and ecosystems interact in various ways at the same time [3]. Therefore, many researchers in the field of complex networks have gradually shifted their focus from single-layer to multi-layer networks. Accordingly, many research results can be applied to everyday life, for instance, in multi-layer social networks [4], communication networks [5], and satellite networks [6]. In summary, multi-layer network research is crucial for development in many fields.
Game theory and complex networks are naturally connected, since cooperation is the cornerstone of human development [7,8]. Game theory makes it possible to study cooperation among rational individuals, while complex networks provide a tool for describing the game relationships between individuals. Typical game models include the prisoner’s dilemma [9], the snowdrift game [10], and the stag hunt game [11]. The key elements of a game are the participants, strategies, and payoffs. The method of strategy learning directly affects the payoff of the game and the resulting level of cooperation; thus, it is very important to choose an appropriate strategy learning method.
Based on the above, in recent years, an increasing number of researchers have used a combination of complex networks and game theory to explain the evolution of abundant complex systems, especially the emergence of cooperative behavior [12]; however, in previous studies, most evolutionary games are based on one-dimensional networks, and only recently has research shifted from a single isolated network to interdependent double-layer networks. Moreover, previous game-theoretic research has seldom considered personal characteristics, even though game behavior is affected by personal attributes in practical scenarios.
In order to address these problems and stimulate richer research, this paper proposes a “public goods game with selection preferences” (PGG-SP) model on a reality-inspired double-layer complex network, in which the double-layer theoretical framework is paired with evolutionary multiplayer game theory. We observe that human strategies are composed of long-term selection tendencies and momentary decisions. Therefore, this paper incorporates individual preference attributes into the learning rules for game strategies. The double-layer complex network can usefully illustrate the multiple identities of individuals in the real world, and the PGG-SP model can express the evolution of human strategies in scenarios involving conflicts between collective and personal interests. In short, this model can explain the cooperative evolution problems that human beings face at multiple levels of social life simultaneously, together with the pertinent influences. In addition, we further explore the effect of the co-evolution of networks and game strategies on cooperation. Our results provide a basis for making better decisions when facing social dilemmas.
The main contributions of this article are summarized as follows:
  • We merge an improved group game with double-layer networks and demonstrate the evolution of cooperation in the double-layer networks based on small-world and scale-free networks. By comparing the evolution behavior of single- and double-layer networks, it is proven that double-layer networks can promote cooperation in multiplayer games. Subsequently, we further discuss the reasons for this phenomenon and the roles of the important parameters in the network.
  • We propose a model called PGG-SP to overcome the problem of previous strategy learning methods not considering individual characteristics, in which an individual’s preferred characteristics represent the probability that an individual tends to cooperate in the game. This attribute is the result of individual accumulation through the long-term game. Thus, this assumption better conforms with the process of making decisions in a practical scenario.
  • To imitate dynamic social interaction between individuals, a dynamic double-layer network is designed which realizes the co-evolution behavior of game strategy and network topology. It is proven that the adaptive adjustment of the strategy and structure can credibly enhance cooperation and maintain stability by comparing the cooperation performance between dynamic, static, single-layer, and double-layer networks. In addition, we construct a dynamic heterogeneous double-layer network to depict a practical scenario.
In the rest of the paper, the related work is introduced in Section 2. Section 3 describes the experimental model in detail. Section 4 provides the experimental results and analysis. Finally, Section 5 presents a summary of the conclusions.

2. Related Work

2.1. Related Definition

2.1.1. Complex Networks

Complex networks [13] now exist everywhere. Accordingly, various complex networks have received increasing attention from researchers. The following will briefly introduce typical complex networks:
  • In a regular network [14], nodes are connected according to specific rules. For example, every pair of nodes may be connected, forming a fully connected network. The structure of a regular network is simple and has certain limitations for describing real-world networks.
  • Random networks [15] were proposed in 1959. Assuming that the network size is N, there are at most N(N − 1)/2 possible edges between nodes, and each of these edges is included with probability p, thereby generating a random network. On average, there are pN(N − 1)/2 edges in the network, where p ∈ [0, 1].
  • A small-world network [16] defines the total number of network nodes as N, where each node is connected to its neighbors on the left and right, and edges are then randomly rewired with a fixed probability.
  • In a scale-free network [17], there are initially M isolated nodes in the network. At each time step t, a new node with M_0 edges is added and connected to M_0 existing nodes (M_0 ≤ M).
This paper constructs a double-layer Newman–Watts [18] network (D-NW) and a double-layer Barabási–Albert [17] network (D-BA). The double-layer complex network is represented by G = (V, E), where E = {E_1, E_2} is the set of edges of the respective layers and V = {V_1, V_2} is the collection of the respective node sets. In addition, |V_1| = |V_2|, i.e., the number of nodes is the same in the two layers. Accordingly, E_1 = {(v_11, v_12) | v_11, v_12 ∈ V_1} and E_2 = {(v_21, v_22) | v_21, v_22 ∈ V_2} represent the edges within the two layers, respectively. At the same time, the edges between the two layers must satisfy E_12 = {(v_1i, v_2i) | v_1i ∈ V_1, v_2i ∈ V_2, i = 1, …, N}, where N is the number of nodes in a single layer. The index i refers to the same node in the two layers, so that node i in the upper layer corresponds to node i in the lower layer. In other words, an inter-layer edge can only connect the same node in different layers. A conceptual diagram of the network considered here is shown in Figure 1.
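To make this construction concrete, the following sketch (in Python, assuming the networkx library is available; the layer sizes and parameters are illustrative, not the values used in the paper) builds two same-sized layers and the inter-layer coupling E_12 in which node i of the upper layer is paired only with node i of the lower layer:

```python
import networkx as nx

def build_double_layer(n=400, kind="NW", p=0.0005, m0=1, k_ring=4, seed=0):
    """Build two equally sized layers plus the inter-layer coupling E12.

    kind="NW": each layer is a Newman-Watts small-world graph (ring lattice with
               k_ring nearest neighbours plus shortcuts added with probability p).
    kind="BA": each layer is a Barabasi-Albert scale-free graph in which every
               new node attaches to m0 existing nodes.
    All parameter values here are illustrative assumptions.
    """
    if kind == "NW":
        layer1 = nx.newman_watts_strogatz_graph(n, k_ring, p, seed=seed)
        layer2 = nx.newman_watts_strogatz_graph(n, k_ring, p, seed=seed + 1)
    else:
        layer1 = nx.barabasi_albert_graph(n, m0, seed=seed)
        layer2 = nx.barabasi_albert_graph(n, m0, seed=seed + 1)
    # E12: node i of the upper layer is coupled only to node i of the lower layer
    interlayer = [(i, i) for i in range(n)]
    return layer1, layer2, interlayer

# Example: a D-NW network with 400 nodes per layer
g1, g2, e12 = build_double_layer(n=400, kind="NW")
```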

2.1.2. Public Goods Games

In public goods games (PGGs) [19], assume that the number of individuals in the group is N and that every individual is independent; there is a shared public pool. Every individual has the opportunity to put a coin into the pool. An individual who chooses to put in a coin is a cooperator (C); otherwise it is a defector (D). Finally, the coins in the pool are shared equally among all individuals. Thus, defectors, who have not contributed coins, will gain more than cooperators in a single round of a PGG. As a result, defection becomes the advantageous option; however, if every individual chooses to defect, there are no coins in the public pool and no individual benefits. Accordingly, a social dilemma arises, representing the conflict between individual and collective interests, a dilemma that is often encountered in everyday life.
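As a concrete illustration (with made-up numbers, using the multiplied pool employed in the model of Section 3), consider a group of |K| = 5 players with enhancement factor r = 3 in which three players contribute one coin each: the pool of 3 coins is amplified to 9 and split equally, so everyone receives 1.8. A cooperator's net payoff is 1.8 − 1 = 0.8, whereas a defector keeps the full 1.8; yet if all five players cooperated, each would net 3·5/5 − 1 = 2. This is exactly the conflict between the individually rational choice (defection) and the collectively optimal one.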

2.2. Recent Research

Network topologies are of great significance in studying the mechanism of cooperation emergence in a game. Most previous studies have used one-dimensional networks; however, one-dimensional networks make it difficult to capture the complexity of realistic networks, as humans usually operate in multi-layer social relationships, such as virtual and face-to-face interactions, which means that networks are often interdependent or interconnected [20]. Therefore, many researchers have recently begun to pay attention to research on multi-layer networks. For instance, Boccaletti et al. [21] considered a multi-layer network as a set composed of n networks; if there are connecting edges between layers, they must connect the same node in different layers, as shown in Figure 2. In addition, a dynamic network topology is more conducive to exploring the performance characteristics of complex systems [22,23]. For example, Wang et al. [24] studied the impact of a dynamically increasing network scale in a public goods game, and Li et al. [25] studied the co-evolution of strategies and structures in an evolutionary network with a priority participation mechanism.
Previous studies on double-layer networks have generally been based on regular or random networks; however, in such structures it is not guaranteed that every node has a double-layer structure. In order to overcome this shortcoming, some researchers have begun to explore double-layer network structures based on small-world or scale-free networks. For example, Liu et al. [26] used double-layer small-world networks for simulation, and Duh et al. [27] constructed multi-layer scale-free networks to discover new paths for cooperation. Building on this reliable prior research, and in order to effectively simulate actual network conditions, this paper uses small-world and scale-free network models as the basis for constructing double-layer networks. In addition, every node has a double-layer structure, which is more in accordance with real conditions.
In fact, people are not completely rational egoists. Although defection is the best choice for individuals to maximize their interests, individuals are still affected by other factors, making it possible for individuals to cooperate in a PGG. Nowak summarized five reciprocity rules in his research: kin selection, direct reciprocity, indirect reciprocity, network reciprocity, and group selection [28], such that the “tragedy of the commons” [29] can be avoided. Subsequently, in order to explore the problem of group cooperation in a PGG, many researchers have paid attention to PGGs under special conditions, such as a distributed mechanism [30], reward and punishment [31,32,33], societal exclusion [34,35,36], and thresholds [37].
In addition, strategy learning is an important part of a PGG. At present, many strategy learning mechanisms have been discovered. Some representative methods are the unconditional imitation [38], proportional learning [39], Fermi rule [40] and Moran rule [41] methods, among others.
It can be found that most of the classic strategy learning mechanisms are based on different calculation methods for learning probabilities and do not consider the attributes of individuals. To solve this problem, this paper proposes a model called PGG-SP, which considers an individual’s preference in order to imitate the process of human strategy formation.
Research has shown that the combination of double-layer networks and game theory has important scientific value. For example, [42,43] investigated interdependent networks and found that they can facilitate cooperation under certain conditions. Nevertheless, these studies are limited either by the use of square-lattice networks or by two-player games: square-lattice networks lack some practical significance, and two-person games cannot explain common multi-person interaction phenomena, such as the provision of public facilities, the treatment of environmental pollution, and so on. Thus, the conventional combination of complex networks and game theory cannot adequately describe some social events. To solve these problems, we propose an improved group game model with a double-layer network.

3. Model

In this section, we explain the PGG-SP model in detail. Firstly, we introduce the rules of payoff calculation and strategy learning. In addition, the method of network adjustment is proposed in Section 3.3. Subsequently, the process for the PGG-SP model with the network is introduced.

3.1. Calculation of Payoff

In this paper, a public goods game model [19] is used. An individual with k neighbors participates in k + 1 game groups per round: one group centered on the individual itself and one centered on each of its k neighbors. In each round of the game, individuals have only two strategies: cooperation (C) or defection (D). In order to reduce the number of parameters, the cost contributed by a cooperator is set to 1 and the cost contributed by a defector is set to 0. After all individuals make their choices, the contribution in the public pool is multiplied by r and divided equally among all individuals in the group [44].
If individual x is a cooperator in a game round, P C is the payoff of the cooperator. The calculation method of P C is shown as follows:
P_C = \sum_{K \in \varphi(x)} \left( \frac{r \sum_{g \in K} S_g}{|K|} - 1 \right)    (1)
where φ(x) is the set of all game groups (including the self-centered group) in which individual x participates, K represents one game group, r is the enhancement factor [44], |K| is the total number of individuals in group K, and g indexes the individuals in group K. If individual g chooses to cooperate, S_g = 1; otherwise, S_g = 0.
Similarly, if individual x is a defector in a game round, P D is the payoff of the defector. The calculation method of P D is shown as follows:
P_D = \sum_{K \in \varphi(x)} \frac{r \sum_{g \in K} S_g}{|K|}    (2)
In the single-layer network, the payoff of the layer is used to represent the payoff for the individual; however, in a double-layer network, the sum of the respective payoff of the corresponding nodes in the two layers is used to represent the total payoff of the individual in order to reflect the influence of multiple interactions between different layers for the individual.
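A minimal sketch of the payoff rule in Equations (1) and (2), written in Python under the assumption that the game groups φ(x) are supplied as lists of member indices and strategies as 0/1 values (the function and argument names are ours, not the authors'):

```python
def total_payoff(x, strategy, groups, r):
    """Payoff of individual x summed over all groups it takes part in (Eqs. (1)-(2)).

    strategy: list or dict mapping individual -> 1 (cooperate) or 0 (defect)
    groups:   the groups phi(x) that x participates in, each a list of members
              (the group centred on x plus one group per neighbour)
    r:        enhancement factor
    """
    total = 0.0
    for K in groups:
        share = r * sum(strategy[g] for g in K) / len(K)  # equal split of the amplified pool
        total += share - (1 if strategy[x] == 1 else 0)   # a cooperator also pays the unit cost
    return total

# Tiny example: individual 0 cooperates in two overlapping 3-player groups, r = 2
print(total_payoff(0, [1, 0, 1, 1], [[0, 1, 2], [0, 2, 3]], r=2.0))
```

In the double-layer case, this quantity would simply be computed in each layer and summed for every individual, as described above.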

3.2. Strategy Learning

In the evolution process, an individual in a single-layer network only learns strategies from its neighbors in that layer, while in a double-layer network an individual learns with probability ω from the corresponding node in the other layer. The parameter ω is the cross-layer learning probability: if ω is large, the probability of learning from the corresponding node in the other layer is relatively large, but there is still probability 1 − ω of choosing a neighbor node in the same layer to learn from. This parameter only exists in the double-layer network and is set to represent imitation behavior across a person’s different social contexts, as in real life. All individuals decide whether to update their strategies simultaneously.
Individuals are assigned a preference attribute b_i, which means that an individual has a higher chance of choosing cooperation when b_i is larger. In addition, an individual does not fix its strategy before the game starts; rather, the choice is determined during the game. Thus, this assumption is more in line with a real selection process, as people often have a tendency rather than a clear, fixed choice to cooperate or defect before the game begins.
In a single-layer network, when individuals update their strategies, they first choose the neighbor node with the greatest payoff and change their attribute b_i according to that neighbor’s strategy in the last round. Accordingly, the calculation is as follows:
b_i = b_i + \Delta (1 - b_i), \quad \text{if } S_g = 1    (3)
b_i = b_i - \Delta\, b_i, \quad \text{if } S_g = 0    (4)
where S_g is the strategy of the learned node and Δ is an influence factor.
In the double-layer network, the nodes of one layer are considered in turn. For each node, a random number x is drawn and compared with ω. If x ≤ ω, the node learns from the corresponding node in the other layer; otherwise, the neighbor node with the highest payoff in the same layer is selected, in accordance with the learning rules of the single-layer network. The learning formulae are the same as Equations (3) and (4). The pseudo-code of the game process is shown in Algorithm 1.
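The learning step of this subsection can be sketched as follows (Python; the data layout and helper names are our assumptions, and the preference update follows Equations (3) and (4) as reconstructed above):

```python
import random

def update_preference(b_i, s_teacher, delta):
    """Move b_i toward 1 if the imitated node cooperated, toward 0 otherwise (Eqs. (3)-(4))."""
    if s_teacher == 1:
        return b_i + delta * (1.0 - b_i)
    return b_i - delta * b_i

def learn(i, layer, b, strategy, payoff, neighbors, omega, delta):
    """One strategy-learning step for node i living in `layer` of a two-layer network.

    b, strategy, payoff and neighbors are indexed as [layer][node]; `payoff` holds the
    total payoff of the previous round. With probability omega the node imitates its
    counterpart in the other layer, otherwise the highest-payoff neighbour in its own layer.
    """
    other = 1 - layer
    if random.random() <= omega:
        s_teacher = strategy[other][i]
    else:
        best = max(neighbors[layer][i], key=lambda j: payoff[layer][j])
        s_teacher = strategy[layer][best]
    b[layer][i] = update_preference(b[layer][i], s_teacher, delta)
```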

3.3. Network Adjustment

In the double-layer network considered here, the first layer network was specified as a dynamic network in order to explore the effect of the double-layer network. The pseudo-code for the update process of the dynamic network is shown in Algorithm 2. The evolution of the network structure is influenced by the individual payoff and further influences the relationships between individuals. When updating the network, firstly, individual i will select the neighbor A with the lowest payoff among all relationships. Secondly, it randomly selects a non-neighbor individual B as a candidate node. Individual B is derived from the neighbors of individual i’s neighbors. As shown in Equation (5) [39], individual i will have probability q to disconnect from individual A and reconnect with individual B. Accordingly, the probability q is calculated as follows:
q = \frac{1}{1 + e^{(P_A - P_B - r/2)/k}}    (5)
where P_A and P_B are the payoffs of individuals A and B in the previous round of the game, respectively, and k = 0.1 quantifies the uncertainty in this rewiring decision, analogous to the noise used in strategy learning [45]. This means that even if the payoff of individual B is far lower than that of individual A, there is still a certain probability of updating the network. Here, r is the enhancement factor, which enters Equation (5) to adjust the rewiring probability of the dynamic network.
Algorithm 1: The process of the game
1. Define and initialize the related parameters: game rounds Ngame, number of individuals N, preference attribute bi of each individual, strategy Si of each individual, fraction of cooperators fc, etc.;
2. for t = 1 to Ngame do
3.  for i = 1 to N do
4.   if random() ≤ bi then
5.    Si = 1;
6.   else
7.    Si = 0;
8.   end if
9.  end for
10. Calculate the payoff of all nodes by using Equations (1) and (2);
11. Calculate the fraction of cooperators fc;
12. for i = 1 to N do
13.  Find the appropriate node j, where j is the neighbor node with the largest payoff in the same layer of the network, or the corresponding node in a different layer;
14.  if Sj = 1 then
15.   Update bi of node i by using Equation (3);
16.  else
17.   Update bi of node i by using Equation (4);
18.  end if
19. end for
20. end for
21. Return fc;
In addition, if there are multiple neighbors with the lowest payoff simultaneously, one of them is randomly selected as the node to disconnect from; however, if an individual is already connected to all candidate nodes, the update operation is abandoned. Particular care is needed to avoid isolated nodes and duplicate edges during the update process.
Algorithm 2: Updating process of the dynamic network
1. Define and initialize the related parameters: game rounds Ngame, number of individuals N, the probability of edge reconnection T, etc.;
2. for t = 1 to Ngame do
3.  if random() ≤ T then
4.   for i = 1 to N do
5.    Find the neighbor node A with the smallest payoff of node i;
6.    Randomly obtain a non-neighbor node B of individual i, which is a neighbor of i’s neighbors;
7.    Calculate q by using Equation (5);
8.    if random() ≤ q then
9.     Update the network: disconnect from individual A and reconnect with individual B;
10.    else
11.     Keep the network unchanged;
12.    end if
13.   end for
14.  end if
15. end for
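A minimal sketch of the rewiring step in Equation (5) and Algorithm 2 (Python with networkx; the function name and the guard conditions are our additions, and Equation (5) is used as reconstructed above):

```python
import math
import random
import networkx as nx

def rewire_once(G, i, payoff, r, k=0.1):
    """Try to rewire one edge of node i: drop the lowest-payoff neighbour A and
    connect to a candidate B from i's second neighbourhood, with probability q (Eq. (5))."""
    nbrs = list(G.neighbors(i))
    if not nbrs:
        return
    A = min(nbrs, key=lambda j: payoff[j])
    # candidates: neighbours of i's neighbours that are not already neighbours of i
    candidates = {c for n in nbrs for c in G.neighbors(n)} - set(nbrs) - {i}
    if not candidates:
        return  # i is already connected to every candidate: abandon the update
    B = random.choice(list(candidates))
    q = 1.0 / (1.0 + math.exp((payoff[A] - payoff[B] - r / 2.0) / k))
    if random.random() <= q and G.degree(A) > 1:   # keep A from becoming isolated
        G.remove_edge(i, A)
        G.add_edge(i, B)  # B is not yet a neighbour, so no duplicate edge is created
```

In a full simulation this step would be attempted for every node of the dynamic layer with probability T per round, as in Algorithm 2.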

3.4. The Process of the PGG-SP Model in the Network

Figure 3 shows a brief flow chart of the PGG-SP model in the network. Firstly, a network model is built, such as a double-layer Newman–Watts (D-NW) network or a double-layer Barabási–Albert (D-BA) network, where the nodes of the network represent individuals and the edges represent relationships; meanwhile, initial preference attribute values are assigned to the individuals. Secondly, the game is started: strategies are drawn according to the preference attributes of the individuals, the payoffs are calculated according to Equations (1) and (2), and strategies are then learned according to Equations (3) and (4). Thirdly, when the network is dynamic, it is updated according to Equation (5). Finally, game rounds are repeated until the stopping criterion is reached, and the statistical results are collected at the end of the process.
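For reference, the whole pipeline of Figure 3 can be condensed into a self-contained toy run on a static double-layer NW network (Python with networkx; all parameter values are illustrative, and the implementation is our own sketch of the PGG-SP rules, not the authors' code):

```python
import random
import networkx as nx

def pgg_sp_demo(n=100, rounds=200, r=6.0, omega=0.2, delta=0.2, seed=1):
    """Toy PGG-SP run on a static D-NW network; returns the final cooperator fractions."""
    rng = random.Random(seed)
    layers = [nx.newman_watts_strogatz_graph(n, 4, 0.05, seed=seed + L) for L in (0, 1)]
    b = [[rng.random() for _ in range(n)] for _ in (0, 1)]        # preference attributes b_i

    def layer_payoffs(L, strat):
        pay = [0.0] * n
        for centre in range(n):                                   # one group per centre node
            K = [centre] + list(layers[L].neighbors(centre))
            share = r * sum(strat[g] for g in K) / len(K)
            for g in K:
                pay[g] += share - strat[g]                        # cooperators pay the unit cost
        return pay

    for _ in range(rounds):
        # 1) draw strategies from the preference attributes
        strat = [[1 if rng.random() <= b[L][i] else 0 for i in range(n)] for L in (0, 1)]
        # 2) payoffs per layer (Eqs. (1)-(2)), summed over layers for each individual
        pay = [layer_payoffs(L, strat[L]) for L in (0, 1)]
        total = [pay[0][i] + pay[1][i] for i in range(n)]
        # 3) learning: cross-layer with probability omega, else best neighbour in own layer
        for L in (0, 1):
            for i in range(n):
                if rng.random() <= omega:
                    s_teacher = strat[1 - L][i]
                else:
                    best = max(layers[L].neighbors(i), key=lambda j: total[j])
                    s_teacher = strat[L][best]
                b[L][i] = b[L][i] + delta * (1 - b[L][i]) if s_teacher else b[L][i] - delta * b[L][i]
    return sum(strat[0]) / n, sum(strat[1]) / n

print(pgg_sp_demo())
```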

4. Experiment and Analysis

We used the fraction of cooperators f_c to represent the level of cooperation, defined as the proportion of individuals in the network who choose to cooperate: f_c = N_C / N, where N is the total number of individuals and N_C is the number of individuals who choose to cooperate. In the following experiments, as in a previous study [34], every data point is the average of 10 separate runs, which yields very stable results. Each experiment was carried out for 5000 time steps. After the density of cooperators reached a dynamic equilibrium, the result was averaged over the last 500 time steps. Table 1 lists the parameters. Unless otherwise stated, cooperators and defectors were initially allocated at random to 50% of the nodes in each network, and the default parameter values were used in the experiments.

4.1. PGG-SP on the Static Network

4.1.1. Comparison of the Single-layer and Double-layer Networks

A set of experiments was designed to compare single-layer and double-layer networks. Figure 4 shows the f_c obtained with different approaches for the single- and double-layer NW networks. We observe that f_c increases with r. When r is small, the double-layer NW network has a higher f_c than the single-layer NW network. Moreover, the f_c of the double-layer NW network increases with ω, because the group’s reward for cooperation is low when r is small. The information transmission ability of the double-layer network is higher than that of the single-layer network, so the higher the cross-layer learning probability, the stronger the influence between the layers of the double-layer network. Accordingly, cooperation in the network is more likely to emerge and be maintained, which raises the level of cooperation. Thus, in this regime the double-layer NW network promotes cooperation; however, as r increases beyond a constant r_x, the f_c of the single-layer NW network gradually becomes higher than that of the double-layer NW network. In addition, the critical value r_x increases with ω; that is, the larger ω is, the larger r must be before the f_c of the single-layer network exceeds that of the double-layer network. When r > r_y (r_y is a constant), under the same r, the f_c of the double-layer NW network decreases with increasing ω. The analysis shows that the critical values (r_x, r_y) exist as constants that change with the network parameters. Furthermore, under these conditions, the growth rate of cooperation slows down. Understandably, when a group’s reward for cooperation is high enough, cooperative clusters are created in the network as r increases. At this point, the ω of the double-layer NW network becomes an interfering factor: as ω increases, the “free rider” phenomenon appears and the growth rate of cooperation decreases.
Figure 5 shows f_c for the single- and double-layer BA networks. It can be observed that the BA network’s ability to promote cooperation is higher than that of the NW network, while the overall trends are similar to the NW case. When r was small (such as r < 2.5), the double-layer BA network promoted cooperation more than the single-layer BA network, and the f_c of a network with a large ω was slightly higher for the same r; however, the growth rate of f_c decreases as ω increases. When ω is very large, the f_c of the single-layer network can even exceed that of the double-layer network at a larger r (such as ω = 0.9, r = 3). This is an interesting result: in a network with a relatively large r, a single-layer network can already produce a sufficient cooperative effect, and at that point ω hinders the spread of cooperation, so more and more “free riders” appear. A similar phenomenon occurs in the NW network.

4.1.2. Exploring Cooperation in Double-layer Network

This section examines the impact of the double-layer network on cooperation. Figure 6 shows the changes of f_c in the double-layer NW network. From the graph we can see that when r_1 + r_2 is a fixed value, f_c stays within a fixed range. Moreover, f_c increases with r_1 + r_2. It can also be seen that the growth rate of f_c decreases as ω increases; accordingly, the critical value of r_1 + r_2 needed to reach essentially full cooperation becomes larger (Figure 6a,b).
From Figure 7, when r_1 is fixed, f_c increases with r_2, which is consistent with Figure 6, where a larger r_1 + r_2 yields a larger f_c. When r_2 is fixed, a network with a large ω (dashed line) has a higher f_c than a network with a small ω (solid line) when r_1 is small; however, as r_1 increases, the f_c of the network with a small ω gradually overtakes the network with a large ω. In addition, the critical r_1 value of this overtaking point decreases as r_2 increases. For some networks with large r_2 values (such as r_2 > 8), the f_c of a network with a small ω is even always higher than that of a network with a large ω. The following explains the behavior of the group game in the double-layer NW network, combining the changes in cooperation level shown in Figure 6 and Figure 7 under different cross-layer learning probabilities and enhancement factors for the two layers.
In the double-layer NW network, the larger the value of r_1 + r_2, the higher the reward for choosing cooperation, which leads to a higher cooperation level in the network. From the structure of the double-layer NW network, each NW layer shares the same characteristics of a short average path length and a high clustering coefficient [16], so as long as the sum of the enhancement factors of the two layers is the same, the network cooperation level is fixed, showing a symmetrical pattern. In addition, the cross-layer learning probability ω increases the degree of coupling between the upper and lower layers in the game, so the transmission rate of cooperation between layers increases. Therefore, when a network with a large ω is compared with a network with a small ω, even if the enhancement factor is small and it is difficult to maintain cooperative clusters within a layer, the transfer of information between the layers due to ω helps maintain cooperation. Thus, the structural characteristics of the double-layer network lead to a strong ability to promote cooperation, so the rate of cooperation increases rapidly; however, as the enhancement factor becomes larger, cooperative clusters within each layer are easily formed, and the learning probability between the layers instead limits the improvement of the cooperation level. Therefore, under a larger enhancement factor, the fraction of cooperators in the network with a large ω gradually falls below that with a small ω due to its slower growth rate.
If we now consider the double-layer BA network shown in Figure 8, the fraction of cooperators also exhibits a symmetrical distribution. As in the NW case, a larger r_1 + r_2 yields a higher f_c; however, the difference from the NW network is that when r_1 + r_2 is the same, the f_c of the double-layer BA network with a larger value of |r_1 − r_2| is higher. Similar to the double-layer NW network, when r_1 + r_2 is small, the f_c of a network with a large ω is slightly higher, and with the increase in r_1 + r_2, the f_c of a network with a small ω exceeds that of the network with a large ω.
The different behavior of f_c on the double-layer BA network for different ω in Figure 8 has the same explanation as for the double-layer NW network. The greater the value of r_1 + r_2, the more cooperators exist thanks to the high rewards, and ω improves the transmission between the two network layers. When the reward is small, a network with a large ω promotes cooperation more; on the contrary, a network with a small ω helps maintain cooperation. The following focuses on why f_c is higher on a network with a larger |r_1 − r_2| when r_1 + r_2 is the same. It is known from previous research that the scale-free network has a higher ability to promote cooperation than the small-world network [46]. Due to the characteristics of the network, there are some nodes with high connectivity in the scale-free network, also known as hub nodes [17]. If a hub node in the BA network chooses to cooperate, its influence on its neighbor nodes is great. Therefore, in the double-layer BA network, if the r of one layer is large, then even if the r of the other layer is small, the layer with the larger r plays a leading role thanks to the cross-layer learning probability. Thus, when r_1 + r_2 is the same, the double-layer BA network with a larger |r_1 − r_2| has a stronger ability to promote cooperation.
To explore the impact of different network sizes on cooperation, it is observed from Figure 9 that whether it is a double-layer NW or a double-layer BA network, for a relatively small r, as ω increases, the network cooperation levels of different scales show an increasing trend; however, for a relatively large r, with the increase in ω, networks of various sizes have shown a state of declining cooperation rates. It shows that the existence of ω is of great help to the network with a small r; however, for a network with a relatively large r, a lower ω should be selected. In addition, it is seen that the relationship between ω, r, and f c has a certain degree of robustness to the network scale. Even under different network sizes, changes in ω and r of the same type of network have roughly the same effect on the fraction of cooperators. At the same time, in Figure 9a,b, f c of the double-layer NW network shows a decreasing characteristic according to the increase in N under the same conditions; however, the double-layer BA network does not have this feature because of the existence of hub nodes and the particularity of its structure.

4.1.3. Influence of the other Network Parameters on Cooperation

The experiment in Figure 10 measures the influence of the other network parameters on PGG-SP. It can be seen from Figure 10 that as the enhancement factor r increases, the double-layer networks of the various structures all show an increase in the cooperation rate, which is consistent with the previous results; however, the NW networks built with different p and the BA networks built with different M_0 behave similarly when the other conditions are the same: f_c decreases with increasing p or M_0 for the same r. This is because increasing these parameters directly increases the number of edges, so the complexity of the network structure increases. With more relationships per individual in the network, it is easier for individuals to rely on others, so the proportion of cooperators decreases.

4.1.4. Exploration of Coordinated Development of the Double-layer Network

In order to explore the cooperative effect of the double-layer network and to investigate the driving effect of a layer with a high initial cooperation rate on a layer with a low cooperation rate, the following experiments set the initial cooperation rates of the two layers to approximately 1 and 0, respectively, and then vary the respective enhancement factors of the two layers.
It is observed from the double-layer NW network in Figure 11 that when the first layer’s cooperation rate is high and its enhancement factor is large, it can lead to the emergence of cooperation in the second layer; otherwise, the double-layer network collapses. Similarly, for the change of strategy pairs in Figure 12, only when the enhancement factor of the layer with a high cooperation rate is large enough does the number of nodes that simultaneously choose to cooperate increase with the number of game rounds; the number of individuals who defect first rises and then decreases. In contrast, if the enhancement factor is not large enough, a state of “full defection” appears. In either case, the strategies of the upper and lower nodes gradually become consistent because of the coupling introduced by ω.
Like the double-layer NW network, the double-layer BA network also shows the characteristics of coordinated development and strategy convergence in Figure 13 and Figure 14. In addition, it can be seen from Figure 13 that when r_1 + r_2 is the same, the larger the value of |r_1 − r_2|, the higher the cooperation rate reached after the given number of game rounds. Moreover, the overall process of cooperative evolution and the b_i value of the node with the largest degree rise or decline with the same tendency. Because hub nodes have large degrees, they play a leading role in the BA network. Thus, a layer with a high cooperation rate can drive a layer with a low cooperation rate, and the two move forward together.

4.2. PGG-SP on the Dynamic Network

4.2.1. Comparison of the Static and Dynamic Double-layer Networks

This section moves on to the dynamic network. Figure 15 shows f_c on the static and dynamic double-layer NW networks with different ω and explores their differences. We observe that under the same ω and r, the density of cooperators in the dynamic double-layer NW network is always higher than that of the static double-layer NW network. Moreover, when r is small, the cooperation level of the network with the large ω is higher; however, as r increases to a certain level, the f_c of the network with the small ω exceeds that of the networks with the large ω. This feature appears in both static and dynamic double-layer networks, and the reason is similar to before: r represents the degree of collective reward, and if the reward level is small, the communication between the two layers promotes the transmission of cooperation; however, if the reward level is large enough, the communication becomes an interfering factor and hinders the creation and maintenance of cooperation. The dynamic double-layer network performs better than the static double-layer network because the first layer can change its connections according to actual needs, which is similar to reality: people can make “rich friends” and abandon “poor friends”, forming a “rich club” of growing interest groups.

4.2.2. Comparison of the Dynamic Single-layer and Double-layer NW Networks

To compare the difference between dynamic single-layer and double-layer NW networks, we ran PGG-SP on the two networks separately. Figure 16 shows the changes of f_c under different r in the dynamic single-layer network and the dynamic double-layer network, respectively. It is found from Figure 16b that the game results of the dynamic double-layer NW network are approximately the same across runs, in which f_c is distributed above or near the average value in each run; however, the results show dispersion and randomness in the dynamic single-layer NW network in Figure 16a, and especially under medium enhancement factors, the dispersion of the evolutionary results is stronger. Comparing the two, the evolution of PGG-SP on a dynamic double-layer network is more evenly distributed, and under the same conditions, f_c in the dynamic double-layer network is greater. In a single-layer NW network, the cooperators quickly form clusters thanks to the dynamic topology; however, there is still the possibility of defectors invading, and if the cooperators do not form a stable cluster, they still face the risk of erosion. On the contrary, in the double-layer NW network, the second layer increases the ability of the dynamic layer to resist risks, and thus it is more conducive to the formation and maintenance of stable clusters. In other words, the dynamic layer of the double-layer NW network improves the ability to generate cooperation, and the static layer improves the ability to maintain cooperation. In summary, these results provide important insights into the evolutionary stability of PGG-SP on dynamic double-layer networks.

4.2.3. Performance of the Dynamic Double-layer NW Network

The purpose of this experiment was to explore the influence of ω and r on the dynamic double-layer NW network. It is observed from Figure 17 that for a dynamic double-layer NW network, f_c increases with ω if r is small; however, f_c decreases with increasing ω when r is large. This is a significant result. When r is very small, the reward within a single layer is very small, and a larger ω lets the two layers communicate and transfer cooperation quickly; but when r is large enough, a cluster of cooperators can already form within a single layer, so a larger ω hampers the spread of cooperation. This is similar to the behavior of the static double-layer NW network, which indicates that the cross-layer learning probability ω may not only promote the production of cooperation but also limit its spread.
The following analysis describes the synergy effect in the dynamic double-layer NW network. It can be seen from Figure 18 that the rate of change of the first (dynamic) layer is greater than that of the second (static) layer. For example, when the network faces the risk of full defection, the dynamic layer accelerates the collapse of the network; however, when the network tends toward comprehensive cooperation, it accelerates the spread of cooperation, which indicates that the dynamic layer is more sensitive to changes in the network. In Figure 19, when the network gradually collapses, the choices in both layers tend toward defection; on the contrary, when the enhancement factor is large enough, both layers show an increasing tendency to choose cooperation at the same time. This reflects the synergy of the double-layer network, which is similar to the situation in reality, where individuals often make the same choices in different social or network relationships due to their attributes, personalities, and abilities.
In order to show the evolution of the game, we drew spot diagrams of the strategy distribution. In Figure 20, the cooperators and defectors are evenly distributed in the network at the beginning. After 1000 game rounds, the network with a small enhancement factor r begins to produce defectors surrounding the cooperators, while the network with a relatively large enhancement factor r already shows the embryonic form of a “cooperator cluster”. Over time, the network with a small enhancement factor is completely swallowed by the defectors; however, a network with a large enhancement factor forms a stronger and larger “cooperator cluster”. At that point, only sporadic defectors are scattered around the network, so they can no longer form a connected structure. The greater the enhancement factor, the greater and faster the chance of a defector being assimilated into a cooperator, and thus the more likely “comprehensive cooperation” is to form.
The purpose of Figure 21 is to show the edge changes when the double-layer network evolves dynamically. As can be seen from Figure 21, when the enhancement factor is larger, the number of changed edges in the dynamic layer gradually decreases, and the greater the value of r, the faster this number decreases. Finally, if the network tends toward cooperation, the edges of the dynamic layer no longer change; however, if r is small, the number of changed edges in the dynamic layer gradually increases, and the smaller the value of r, the faster this number grows, as even when the network has tended toward full defection, the dynamic layer is still trying to change its topological structure to support cooperation.

4.2.4. Simulating Real-world Dynamic Double-layer Networks

In order to explore real-world networks more effectively, we built a BA-NW double-layer network based on previous research. Strogatz [46] showed that individuals’ online network environments have scale-free characteristics, such as the World Wide Web and co-authorship networks of scientists, and Milgram [47] showed that networks of acquaintances exhibit small-world characteristics. Based on these results, we built a BA-NW double-layer network, where the first layer is a BA network and the second layer is a NW network. This model can effectively simulate the complex environment in which humans live in modern society: in the online world, most individuals exist in a scale-free network that is connected at all times, while in practical, face-to-face scenarios, communication between individuals often shows strong “small-world” characteristics. When these are merged, the entire social environment of the individual is constructed.
Figure 22 shows the influence of r and ω on cooperation in the dynamic BA-NW network. As the r of the dynamic BA-NW network increases, the level of cooperation becomes stronger and stronger. When r is small, the cooperation level of the network with a large ω is higher; conversely, when r is large, the network with a small ω cooperates better. This is because when r is small, a larger ω can act as a communication medium to enhance the level of cooperation; however, if the reward level is large enough, ω should be reduced to avoid mutual interference, thereby preserving the high cooperation rate.
In order to study the stability of the BA-NW network, we graphed and analyzed the results of multiple experiments. In Figure 23, when the enhancement factor r is at a medium size, the BA-NW network has poor stability and strong divergence; however, the BA-NW network performs relatively evenly with a lower or higher r, where it can form a stable “defectors cluster” or “cooperators cluster”. Consequently, it is similar to reality. If people are faced with a conflict between collective interests and individual interests, they will be relatively firm in their choices when faced with the lower or higher reward levels; however, people are usually in a state of vacillation and may change their choices at any time in the face of a medium reward.

5. Conclusions

This paper has studied how double-layer complex networks affect the evolution of cooperation in group games. First, a group game in the double-layer network structure was used to describe the multiple identities of human beings. Second, in order to make the experiment closer to people’s choices in real life, an improved game model called PGG-SP was proposed, so that individual preferences could become an important indicator of choice. Third, the co-evolution process of the strategy and the network was designed. In addition, the effect of a dynamic double-layer network structure on cooperation was explored.
In order to explain the double-layer network’s role in promoting cooperation in group games from multiple perspectives, this paper has used small-world networks and scale-free networks as the basis to compare the cooperation effects of single-layer networks and double-layer networks. We further explored the changes in cooperation when some parameters were different in the double-layer network, such as the learning probability, network size, and edge adding probability. Accordingly, the results show that the double-layer NW and BA networks both promote cooperative evolution with the PGG-SP model; however, unlike previous studies, where the fractions of cooperators have simply increased with the enhancement factor, the experimental results show that when the enhancement factor of the game is small, the double-layer network with a higher learning probability for different layers will more strongly promote cooperation. Furthermore, it has been found that one layer of the network in the double-layer network has a leading effect on the other layer, and that different layers in networks show the characteristics of coordinated development and the convergence of strategies. In addition, this paper has constructed a more realistic dynamic double-layer network structure. The dynamic double-layer NW network is more capable of generating and maintaining cooperation.
In future work, we will consider improving the experimental model in order to better explain related phenomena and to raise the level of cooperation in society and nature. Relevant directions include adding further critical factors of cooperation to the public goods game and constructing network structures with more layers, among others.

Author Contributions

Conceptualization, D.G. and M.F.; methodology, D.G. and M.F.; model design/experiment/analysis, D.G. and M.F.; writing—original draft preparation, M.F.; writing—review and editing, D.G., M.F. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable; the study does not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Albert, R.; Barabási, A.L. Statistical mechanics of complex networks. Rev. Mod. Phys. 2001, 74, 47–97.
  2. Li, X.; Yang, Y.; Chen, Y.; Niu, X. A Privacy Measurement Framework for Multiple Online Social Networks against Social Identity Linkage. Appl. Sci. 2018, 8, 1790.
  3. Pilosof, S.; Porter, M.A.; Pascual, M.; Kefi, S. The multilayer nature of ecological networks. Nat. Ecol. Evol. 2017, 1, 1–9.
  4. Rao, X.; Zhao, J.; Chen, Z.; Lin, F. Substitute Seed Nodes Mining Algorithms for Influence Maximization in Multi-Social Networks. Future Internet 2019, 11, 112.
  5. Wang, C.; Wang, G.; Luo, X.; Li, H. Modeling rumor propagation and mitigation across multiple social networks. Phys. Stat. Mech. Appl. 2019, 535, 122240.
  6. Jiang, C.; Zhu, X. Reinforcement Learning Based Capacity Management in Multi-Layer Satellite Networks. IEEE Trans. Wirel. Commun. 2020, 19, 4685–4699.
  7. Guazzini, A.; Duradoni, M.; Lazzeri, A.; Gronchi, G. Simulating the Cost of Cooperation: A Recipe for Collaborative Problem-Solving. Future Internet 2018, 10, 55.
  8. Fehr, E.; Fischbacher, U. The nature of human altruism. Nature 2003, 425, 785–791.
  9. Moore, A.D.; Martin, S. Privacy, transparency, and the prisoner’s dilemma. Ethics Inf. Technol. 2020, 22, 211–222.
  10. Ramazi, P.; Cao, M. Global Convergence for Replicator Dynamics of Repeated Snowdrift Games. IEEE Trans. Autom. Control 2021, 66, 291–298.
  11. Buyukboyaci, M. Risk attitudes and the stag-hunt game. Econ. Lett. 2014, 124, 323–325.
  12. Zhang, J.; Li, Z.; Xu, Z.; Zhang, C. Evolutionary Dynamics of Strategies without Complete Information on Complex Networks. Asian J. Control 2018, 22, 362–372.
  13. Lu, R.; Yu, W.; Lü, J.; Xue, A. Synchronization on Complex Networks of Networks. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 2110–2118.
  14. Kibangou, A.Y.; Commault, C. Observability in Connected Strongly Regular Graphs and Distance Regular Graphs. IEEE Trans. Control Netw. Syst. 2014, 1, 360–369.
  15. Erdos, P.; Renyi, A. On the evolution of random graphs. Bull. Int. Stat. Inst. 1960, 38, 343–347.
  16. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’ networks. Nature 1998, 393, 440–442.
  17. Barabási, A.L.; Albert, R. Emergence of Scaling in Random Networks. Science 1999, 286, 509–512.
  18. Newman, M.E.J.; Watts, D.J. Renormalization group analysis of the small-world network model. Phys. Lett. A 1999, 263, 341–346.
  19. Galbiati, R.; Vertova, P. Obligations and cooperative behaviour in public good games. Games Econ. Behav. 2008, 64, 146–170.
  20. Wang, Z.; Wang, L.; Szolnoki, A.; Perc, M. Evolutionary games on multilayer networks: A colloquium. Eur. Phys. J. B 2015, 88, 124.
  21. Boccaletti, S.; Bianconi, G.; Criado, R.; del Genio, C.I.; Gómez-Gardeñes, J.; Romance, M.; Sendiña-Nadal, I.; Wang, Z.; Zanin, M. The structure and dynamics of multilayer networks. Phys. Rep. 2014, 544, 1–122.
  22. Gulati, R.; Sytch, M.; Tatarynowicz, A. The rise and fall of small worlds: Exploring the dynamics of social structure. Organ. Sci. 2012, 23, 449–471.
  23. Shen, C.; Chu, C.; Guo, H.; Shi, L.; Duan, J. Coevolution of Vertex Weights Resolves Social Dilemma in Spatial Networks. Sci. Rep. 2017, 7, 1–7.
  24. Wang, L.; Xia, C.; Wang, J. Coevolution of network structure and cooperation in the public goods game. Phys. Scr. 2013, 87, 055001.
  25. Li, Y.; Shen, B. The coevolution of partner switching and strategy updating in non-excludable public goods game. Phys. A 2013, 392, 4956–4965.
  26. Liu, X.; Wang, S.; Ji, H. Double-layer P2P networks supporting semantic search and keeping scalability. Int. J. Commun. Syst. 2015, 27, 3956–3970.
  27. Duh, M.; Gosak, M.; Slavinec, M.; Perc, M. Assortativity provides a narrow margin for enhanced cooperation on multilayer networks. New J. Phys. 2019, 21, 123016.
  28. Nowak, M.A. Five Rules for the Evolution of Cooperation. Science 2006, 314, 1560–1563.
  29. Hardin, G. The Tragedy of the Commons. Science 1968, 162, 1243–1248.
  30. Sinha, A.; Anastasopoulos, A. Distributed Mechanism Design with Learning Guarantees for Private and Public Goods Problems. IEEE Trans. Autom. Control 2020, 65, 4106–4121.
  31. Fang, Y.; Benko, T.P.; Perc, M.; Xu, H.; Tan, Q. Synergistic third-party rewarding and punishment in the public goods game. Proc. R. Soc. A 2019, 475, 20190349.
  32. Zhang, B.; Cui, Z.; Yue, X. Cluster evolution in public goods game with fairness mechanism. Phys. A 2019, 532, 121796.
  33. Zhang, S.; Zhang, Z.; Wu, Y.; Yan, M.; Xie, Y. Tolerance-based punishment and cooperation in spatial public goods game. Chaos Solitons Fract. 2018, 110, 267–272.
  34. Quan, J.; Yang, W.; Li, X.; Wang, X.; Yang, J. Social exclusion with dynamic cost on the evolution of cooperation in spatial public goods games. Appl. Math. Comput. 2020, 372, 124994.
  35. Li, K.; Cong, R.; Wu, T.; Wang, L. Social exclusion in finite populations. Phys. Rev. E 2015, 91, 042810.
  36. Szolnoki, A.; Chen, X. Alliance formation with exclusion in the spatial public goods game. Phys. Rev. E 2017, 95, 052316.
  37. İriş, D.; Lee, J.; Tavoni, A. Delegation and Public Pressure in a Threshold Public Goods Game. Environ. Resour. Econ. 2019, 74, 1331–1353.
  38. Li, P.P.; Ke, J.; Lin, Z.; Hui, P.M. Cooperative behavior in evolutionary snowdrift games with the unconditional imitation rule on regular lattices. Phys. Rev. E 2012, 85, 021111.
  39. Santos, F.C.; Pacheco, J.M. Scale-Free Networks Provide a Unifying Framework for the Emergence of Cooperation. Phys. Rev. Lett. 2005, 95, 098104.
  40. Szabó, G.; Tőke, C. Evolutionary prisoner’s dilemma game on a square lattice. Phys. Rev. E 1998, 58, 69–73.
  41. Sarkar, B. Moran-evolution of cooperation: From well-mixed to heterogeneous complex networks. Phys. A 2018, 497, 319–334.
  42. Wang, Z.; Szolnoki, A.; Perc, M. Interdependent network reciprocity in evolutionary games. Sci. Rep. 2013, 3, 1183.
  43. Nag Chowdhury, S.; Kundu, S.; Duh, M.; Perc, M.; Ghosh, D. Cooperation on Interdependent Networks by Means of Migration and Stochastic Imitation. Entropy 2020, 22, 485.
  44. Szolnoki, A.; Perc, M. Reward and cooperation in the spatial public goods game. Europhys. Lett. 2010, 92, 38003.
  45. Szabó, G.; Fáth, G. Evolutionary games on graphs. Phys. Rep. 2007, 446, 97–216.
  46. Strogatz, S. Exploring complex networks. Nature 2001, 410, 268–276.
  47. Milgram, S. The small world problem. Psychol. Today 1967, 2, 60–67.
Figure 1. The conceptual diagram of the network.
Figure 2. Schematic diagram of a double-layer network.
Figure 3. Flow chart of the public goods game with selection preferences (PGG-SP) model in a double-layer network.
Figure 4. f_c versus r for the double-layer NW networks with different ω values and a single-layer NW network, with p = 0.0005.
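The quantities reported throughout Figures 4–23 are the fraction of cooperators f_c and the enhancement factor r (see Table 1). For orientation, a minimal formulation is given below; the group payoff is written in the standard spatial public goods form, which is an assumption for illustration only, since the exact PGG-SP payoff with selection preferences is specified in the main text.

```latex
% Fraction of cooperators among the N players (f_c in Table 1)
f_c = \frac{N_C}{N}

% Assumed standard public goods payoff of player i in a group g of size |g|:
% every cooperator contributes 1, the pool is multiplied by the enhancement
% factor r and shared equally among all group members.
\Pi_i^{g} = \frac{r\, n_C^{g}}{|g|} - s_i,
\qquad
s_i =
\begin{cases}
1, & \text{if } i \text{ cooperates},\\
0, & \text{if } i \text{ defects}.
\end{cases}
```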
Figure 5. f_c versus r for the double-layer BA network with different ω values and for a single-layer BA network, with M_0 = 1.
Figure 6. Color maps of f_c as a function of the enhancement factors (r_1, r_2) in the double-layer NW network, obtained with p = 0.0005; (a) ω = 0.2, (b) ω = 0.6.
Figure 7. The relationship between r_1 and f_c in a double-layer NW network with different ω and r_2 values. Solid line: ω = 0.2. Dashed line: ω = 0.6.
Figure 8. Color maps of f_c as a function of the enhancement factors (r_1, r_2) in the double-layer BA network, obtained with M_0 = 1; (a) ω = 0.2, (b) ω = 0.6.
Figure 9. f_c of the double-layer network versus ω for different network sizes N (400, 900, and 1600) and different values of r. Panels (a,b): the double-layer NW network with p = 0.0005; panels (c,d): the double-layer BA network with M_0 = 1. (a) r = 6, (b) r = 10, (c) r = 1, (d) r = 4.
Figure 10. The variation of f_c with r: (a) the double-layer NW network with different p; (b) the double-layer BA network with different M_0. Results obtained with N = 1600 and ω = 0.2.
Figure 11. The evolution of the double-layer NW network in the game: cooperation rate of the first-layer network (black solid line), cooperation rate of the second-layer network (red solid line), and average cooperation rate of the whole network (blue solid line). Results obtained with p = 0.0005 and ω = 0.2. (a) r_1 = r_2 = 6; (b) r_1 = 12, r_2 = 6.
Figure 12. The strategy evolution of the double-layer NW network in the game, where a pair of nodes simultaneously choose to cooperate (orange), both choose to defect (purple), or one cooperates while the other defects (green). Results obtained with p = 0.0005 and ω = 0.2. (a) r_1 = r_2 = 6; (b) r_1 = 12, r_2 = 6.
Figure 13. The evolution of the double-layer BA network in the game: cooperation rate of the first-layer network (black solid line), cooperation rate of the second-layer network (red solid line), average cooperation rate of the whole network (blue solid line), b_i of the largest node in the first layer (green solid line), and b_i of the largest node in the second layer (purple solid line). Results obtained with M_0 = 1 and ω = 0.2. (a) r_1 = r_2 = 2; (b) r_1 = 3, r_2 = 1; (c) r_1 = 4, r_2 = 2.
Figure 14. The strategy evolution of the double-layer BA network in the game, where a pair of nodes simultaneously choose to cooperate (orange), both choose to defect (purple), or one cooperates while the other defects (green). Results obtained with M_0 = 1 and ω = 0.2. (a) r_1 = r_2 = 2; (b) r_1 = 3, r_2 = 1; (c) r_1 = 4, r_2 = 2.
Figure 15. f_c versus r on the static and dynamic double-layer NW networks with different ω, under the same p = 0.0005.
Figure 16. Results of 20 independent runs for different values of r: (a) the dynamic single-layer NW network; (b) the dynamic double-layer NW network of the same scale, with p = 0.0005 and ω = 0.2.
Figure 17. f_c of the dynamic double-layer NW network as a function of ω for different values of r, with p = 0.0005.
Figure 18. The evolution of the dynamic double-layer NW network in the game: cooperation rate of the first-layer network (black solid line) and cooperation rate of the second-layer network (red solid line). Results obtained with p = 0.0005 and ω = 0.2. (a) r = 4, (b) r = 7, (c) r = 10.
Figure 19. The strategy evolution of the dynamic double-layer NW network in the game, where a pair of nodes simultaneously choose to cooperate (orange), both choose to defect (purple), or one cooperates while the other defects (green). Results obtained with p = 0.0005 and ω = 0.2. (a) r = 4, (b) r = 7, (c) r = 10.
Figure 20. Strategy distributions of the dynamic double-layer NW network of the same scale at 1, 1000, 3000, and 5000 game rounds for different values of r. Blue spots denote defectors and red spots denote cooperators. Results obtained with p = 0.0005 and ω = 0.2, where (a–d) r = 4, (e–h) r = 7, and (i–l) r = 10.
Figure 21. The number of changed edges in the dynamic layer of a double-layer NW network over time for different values of r, with p = 0.0005 and ω = 0.2.
Figure 22. The relationship between f_c and r in the dynamic BA-NW network with different ω values in different layers, where p = 0.0001 and M_0 = 1.
Figure 23. The distribution of results over 20 independent runs of the dynamic BA-NW network with different r values, where p = 0.0001 and M_0 = 1.
Table 1. The list of parameters. NW: Newman–Watts; BA: Barabási–Albert.

Parameter | Meaning | Default Value
Δ | Influence factor | 0.01
k | Influence factor | 0.1
T | Probability of edge reconnection | 0.5
N | Network size | 900
p | Edge-addition probability in the NW network | –
M_0 | Number of edges added per time step in the BA network | –
ω | Learning probability for different layers | –
r | Average enhancement factor | –
r_1 | Enhancement factor of the first-layer network | –
r_2 | Enhancement factor of the second-layer network | –
f_c | Fraction of cooperators | –
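To make the parameter list concrete, the sketch below shows one way the double-layer setup of Table 1 could be assembled with networkx. It is a minimal illustration under stated assumptions and not the authors' PGG-SP implementation: the NW + BA layer combination, the standard public goods payoff, the Fermi imitation rule with noise k, and the use of ω as a cross-layer learning probability are all assumptions made for this sketch; the influence factor Δ, the reconnection probability T, and the dynamic rewiring step are omitted.

```python
# Illustrative sketch only (not the authors' PGG-SP code): build a two-layer
# network from Table 1's parameters, play a standard public goods game on
# each layer, and update strategies with a Fermi imitation rule.
import math
import random
import networkx as nx

N = 900          # network size (Table 1)
p = 0.0005       # edge-addition probability of the NW layer (Table 1)
M0 = 1           # edges added per time step in the BA layer (Table 1)
k = 0.1          # "influence factor", used here as Fermi noise (assumption)
omega = 0.2      # learning probability for the other layer (assumed meaning)
r = 4.0          # enhancement factor, identical in both layers in this sketch
ROUNDS = 2000    # number of elementary update rounds (arbitrary)

# Two layers over the same node set 0..N-1 (NW + BA combination is assumed).
layers = [nx.newman_watts_strogatz_graph(N, 4, p),   # NW small-world layer
          nx.barabasi_albert_graph(N, M0)]           # BA scale-free layer

# strategy[layer][node]: 1 = cooperator, 0 = defector (random start)
strategy = [[random.randint(0, 1) for _ in range(N)] for _ in layers]

def payoff(layer, strat, i):
    """Standard spatial PGG payoff of node i: it joins one group per
    neighbourhood it belongs to; each cooperator contributes 1, the pot is
    multiplied by r and shared equally within the group."""
    total = 0.0
    for centre in [i] + list(layer.neighbors(i)):
        group = [centre] + list(layer.neighbors(centre))
        n_c = sum(strat[j] for j in group)
        total += r * n_c / len(group) - strat[i]
    return total

for _ in range(ROUNDS):
    for li, layer in enumerate(layers):
        i = random.randrange(N)
        if random.random() < omega:
            # Learn from the same individual's strategy in the other layer.
            strategy[li][i] = strategy[1 - li][i]
            continue
        neigh = list(layer.neighbors(i))
        if not neigh:
            continue
        j = random.choice(neigh)
        p_i = payoff(layer, strategy[li], i)
        p_j = payoff(layer, strategy[li], j)
        # Fermi rule: adopt j's strategy with prob. 1 / (1 + exp((p_i - p_j)/k));
        # the exponent is clamped to avoid floating-point overflow.
        x = min((p_i - p_j) / k, 60.0)
        if random.random() < 1.0 / (1.0 + math.exp(x)):
            strategy[li][i] = strategy[li][j]

# Fraction of cooperators f_c per layer and averaged over the two layers.
f_c = [sum(s) / N for s in strategy]
print(f_c, sum(f_c) / 2)
```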