Article

Optimal Mobility-Aware Wireless Edge Cloud Support for the Metaverse

Center of Telecommunication Research, King’s College London, London WC2R 2LS, UK
* Author to whom correspondence should be addressed.
Future Internet 2023, 15(2), 47; https://doi.org/10.3390/fi15020047
Submission received: 20 December 2022 / Revised: 23 January 2023 / Accepted: 24 January 2023 / Published: 26 January 2023

Abstract:
Mobile-augmented-reality (MAR) applications extended into the metaverse could provide mixed and immersive experiences by amalgamating the virtual and physical worlds. However, joining MAR with the metaverse requires reliable and high-quality support for foreground interactions and rich background content, which intensifies the consumption of energy, caching and computing resources. To tackle these challenges, a more flexible request-assignment and resource-allocation framework with more efficient processing is proposed in this paper, which anchors decomposed metaverse AR services at different edge nodes and proactively caches background metaverse region models embedded with target augmented-reality objects (AROs). Advanced terminals are also considered to further reduce service delays at an acceptable energy-consumption cost. We then propose and solve a joint-optimization problem which explicitly balances service delay and energy consumption under constraints on perceived user quality in a mobility event. By explicitly taking into account the capabilities of user terminals, the proposed optimized scheme is compared to a terminal-oblivious scheme. According to a wide set of numerical investigations, the proposed scheme has wide-ranging advantages in service latency and energy efficiency over nominal baseline schemes which neglect the capabilities of terminals, user physical mobility, service decomposition and the inherent multimodality of the metaverse MAR service.

1. Introduction

Recently, the metaverse, which can be described as an endless virtual world where users interact through their avatars, has become popular in both academic and commercial areas [1]. Augmented reality combines digital information with the real world for real-time presentation, and such experiences can be regarded as a continuum ranging from assisted reality to mixed reality, according to the level of local presence [2,3]. Mobile augmented reality (MAR), which further provides artificial perceptual information to augment the physical world during a mobility event, can be extended and enhanced in the wireless edge-supported metaverse with today’s available technologies, such as digital twins and head-mounted-display rendering [4,5]. In addition, compared to existing MAR applications, users could seamlessly mix their experience of the metaverse and physical world through various metaverse MAR applications, such as massively multiplayer online video games and virtual concerts [4]. Users equipped with MAR devices can upload and analyze their environment through AR customization to obtain appropriate AR objects (AROs) and access the metaverse in mobile edge networks [6]. AR marketing is regarded as having potential in the metaverse, and can be seen as a typical example of a more forward-looking metaverse AR application because it replaces physical products with AR holograms and enables direct foreground interactions between the customer and the digital-marketing application interface in the background environment [2,7]. Rendering three-dimensional (3D) AROs with the background virtual environment and updating them in the metaverse consume significant amounts of energy and can be highly demanding in terms of required caching and computing resources [6,8]. Hence, such applications are delay- and energy-sensitive and face challenges in ensuring the quality of user experience and providing reliable and timely interactions with the metaverse [4,6].
Generally speaking, a metaverse scene will, in essence, consist of a background view as well as many objects in foreground interactions. The background view at a defined amalgamated virtual and physical location can be deemed static or slowly changing [6,9]. A typical background scene can be the 3D model of the metaverse, a presentation of a related background virtual environment based on a certain user viewport [9,10]. Its size can reach tens of MB, and the corresponding complexity of the related rendering functionalities, measured by computation load, is also large (e.g., 10 CPU cycles/bit) [9,11]. On the other hand, objects (such as, for example, avatars) in foreground interactions which are embedded in the metaverse scene change much more frequently; however, they are significantly less complex than the background scene (e.g., 4 CPU cycles/bit) [8,9]. However, even though those objects are less complex than the background scene, their frequent changes mean they also require rendering in a timely manner to avoid a considerable degradation in the quality of user experience. Thus, in this paper, rendering for both foreground and background is deployed at the edge clouds (ECs) rather than only at the terminals, to make full use of caching and computing resources. Notice that uploaded information is concentrated in foreground interactions, while background-content checking consumes not only computing resources but also a significant amount of local cache to match and integrate AROs and related models of the metaverse. Hence, similar to our previous work in [12], the metaverse MAR application can also be decomposed into computational- and storage-intensive functions, which serve as a chain for improved assignment and resource allocation.
The general workflow of a metaverse MAR application supported by ECs is shown in Figure 1. A metaverse region is a fraction of the complete metaverse and is assumed to be located on a server geographically close to its corresponding service region in the mobile network (not necessarily running within an EC). The metaverse MAR service can be triggered by certain behaviors, including foreground interactions [6,9,13]. Then, background-related content, such as pre-cached 3D models and AROs, is first searched for in the EC cache to check whether it matches what the user requires. If the target AROs or model information cannot be found in the cache, the case is labelled a “cache miss” and the request is redirected to the original metaverse region stored in a cloud deeper in the network. Finally, according to the user’s physical mobility and virtual orientation extracted from foreground interactions, the matched AROs and model are integrated into a final frame and transmitted back to the user [6,9]. At the same time, updated information is also sent to the metaverse region for synchronization. Thus, the user can be aware of changes caused by other participants if they share the same metaverse region during the service. Based on the above discussion, it becomes apparent that the overall quality of metaverse MAR applications depends on communication delays and the capabilities of the above-mentioned network entities, which participate in service creation.
Figure 2 further reveals the difference between cases with and without consideration of user mobility and service decomposition for the rendering requirements of metaverse MAR applications. Clearly, when neglecting user mobility and service decomposition, as shown in case (a), models, target AROs and metaverse MAR applications are all cached as close as possible to the user’s initial location. This may place a heavy burden on the adjacent server when there are multiple users in the same cell and cause a “hot-spot” area [12]. However, when user mobility and service decomposition are enabled, as shown in case (b), the decomposed functions can spread over the ECs between a user’s initial location and potential destination. Hence, service delivery becomes more flexible and efficient in terms of assigning requests and allocating network resources. In case (a), although user A only needs wireless communication in the initial location, the service takes two hops after the user moves to the middle cell. However, when taking mobility and local resources into consideration, the same user, A, in case (b) could experience a shorter delay after the mobility event by allowing two more hops before it. Hence, in a high-mobility scenario, it might not always be ideal to allocate requests and services as close as possible to the user’s initial location. As shown in the figure for users A and B in case (b), the AR contents in the model might be similar across the viewports of different users. Hence, participating users should be aware of each other’s updates and could share rendering functions to reduce the consumed resources. In this paper, we apply the structural similarity (SSIM) index, proposed in [14], for the user perception experience; it is a widely accepted method which measures the perceived quality of an image by comparison to its original version [14]. Caching more models and AROs also incurs additional processing and transmission delays, as well as energy consumption [4,6]. Hence, the joint optimization has to accept some potential loss due to the constraints on computing and storage resources.
Figure 3 further reveals the differences among cases which allocate the MAR service on either terminals or ECs. Note that, when neglecting EC support and service decomposition, the whole MAR application must run on the terminal and can become a heavy burden; this is shown in case (i). According to [6,15], MAR applications on terminals take up around 47% of the available power consumption and could also affect the performance of other (potentially critical) functionalities on the terminal when the service becomes more complex, as in the metaverse. In case (ii), the processing time is significantly reduced by enabling EC support, at the cost of an increased transmission delay [16]. Although the terminals are still limited in computing resources and more sensitive to energy consumption, neglecting their capabilities is not an optimal configuration, especially when recent technical improvements showcase their computing and caching potential. Noticing that the foreground interactions are much less complex than background scenes and are more suitable for terminals, we further consider terminals in the scenario, as shown in case (iii), and execute only the computationally intensive functions there. Hereafter, the optimization of the integration of terminals and ECs becomes the scheme OptimT and is compared to the previous case, (ii), which neglects the terminals (OptimNT). To maintain a fair comparison and focus on energy consumption and service latency, the overall energy consumption is measured explicitly (in Joules) instead of the power, as in [16], and the user perception quality is treated as a given bound.
In this paper, by explicitly considering the terminals, user mobility, service decomposition and models of metaverse regions with embedded AROs, a joint optimization framework (OptimT) is constructed for the metaverse MAR application in the edge-supported network. The proposed optimization framework seeks a balance between energy consumption and service delay under a given level of user perception quality. To reveal the influence of capacity and cost of terminals on the metaverse MAR application, this OptimT scheme is further compared with another optimized framework (OptimNT) proposed in our previous work [16], which only focuses on ECs and neglects terminals.

2. Related Work

Hereafter, a series of closely related works in the area of edge/cloud support of metaverse-type applications over 5G and beyond wireless networks are discussed and compared with the approach proposed in this paper.
Given that, despite their limitations, MAR terminals have witnessed a series of technical improvements, their joint utilization with ECs for network optimization can bring significant benefits. In [17], the authors aim to minimize the energy consumption of multicore smart devices, which are commonly used for AR applications. By tracking the response process of an AR application, they manage to measure the terminal’s energy consumption using Amdahl’s law. Although that law and a similar framework for the terminal energy consumption are also applied in this paper, we consider a broader use-case scenario which includes edge servers and a more complex balance between energy and latency. The work in [18] shares a similar target to this paper, namely achieving a balance between latency and energy consumption under an acceptable level of image quality. However, ref. [18] adds local sensors to MAR devices for recognizing and tracking AROs, so that their scheme can realize selective local visual tracking (optical flow) and selective image offloading. The focus in [18] is on the object-recognition stage, with four AR applications requiring different types of AROs. Without support from ECs, they utilize a remote cloud server as a heavy database for 3D AROs and leave most tasks to the terminals, whilst cloud offloading is only triggered when calibration is required. In this work, by contrast, we consider an EC-supported network and compare schemes which explicitly utilize the MAR terminals, or not. In [19], the energy efficiency is optimized under a required service latency for MAR in an EC-supported network. Similarly, the authors consider proactive caching and propose a tradeoff between energy and latency in terms of cache size. However, their mobile cache and power-management scheme still focuses on the energy consumption of terminals.
Clearly, none of the above-mentioned works explicitly considers user mobility, perception quality, service decomposition and metaverse application features, as our work does.
The problem of efficient resource allocation for supporting metaverse-type applications is starting to attract a significant amount of attention, and a plethora of aspects have already been considered. In [20], the emphasis is placed on the synchronization of Internet-of-Things services, in which IoT devices are employed to collect real-world data for virtual service providers. By calculating maximum rewards, users can select the ideal virtual service provider. The researchers then propose a game framework which considers such a reward-allocation scheme and a general metaverse sensing model [20]. In [21], the authors also adopt a game-theoretical framework by considering task offloading between mobile devices based on coded distributed computing in a proposed vehicular metaverse environment. Another framework, proposed by [22], manages and allocates different types of metaverse applications so that common resources can be shared among them through a semi-Markov decision process and an optimal admission-control scheme. The work in [23] applies a set of proposed resource-optimization schemes in a virtual-education metaverse. More specifically, a stochastic optimal resource-allocation scheme is developed with the aim of reducing the overall cost incurred by a service provider. Similar to the service decomposition in this paper, they only upload and cache some parts of the data or services, to achieve reduced levels of delay and offer better privacy [23]. The work in [5] is closely related since, in that paper, not only latency but also energy consumption is considered, as is the case for our proposed model, which uses a multi-objective optimization approach. For ultra-reliable and low-latency communication services, the authors bring in digital twins and deploy a mobility-management entity for each access point to determine the probabilities of resource allocation and data offloading [5]. Then, by applying a deep neural network, the proposed scheme tries to identify a suitable user association and an optimized resource-allocation scheme for this association. However, in this paper, the core idea is to decompose the service and allow a flexible allocation across edge clouds by also taking into account user mobility. The work in [4] considers virtual-reality applications in the metaverse and regards the service delivery as a series of events in a market, in which users are buyers and service providers are sellers. Hence, they apply a double Dutch auction to achieve a common price through asynchronous and iterative bidding stages [4]. They capture the quality of the user perception experience through structural similarity (SSIM) and video multi-method assessment fusion. In our proposed framework, we also utilize the SSIM metric to determine the frame quality after integrating the background scene and AR contents [4]. The work in [4] further brings in a deep-reinforcement-learning-based auctioneer to reduce the information-exchange cost. In contrast, in this paper, a multi-objective optimization approach is adopted, where we aim to balance different objective functions using the scalarization method, whilst considering the inherent user mobility in an explicit manner.

3. System Model

3.1. Multirendering in Metaverse AR

For a given wireless network topology, we denote with the set $\mathcal{M} = \{1, 2, \ldots, M\}$ the available locations, including available edge clouds and terminals. Assuming that each user makes a single request, the corresponding MAR service requests $r \in \mathcal{R}$ in the metaverse region are generated by mobile users equipped with MAR devices. Request $r$ emerges from network location $f(r)$, which represents the initial access router to which this user is first connected. For each request, the user terminal can also be viewed as a valid location at which to cache or process information locally. Thus, we denote with $j_r \in \mathcal{M}$ the terminal sending the request $r$. The terminals are brought into consideration in the following formulation; defining a location with the constraint $j \in \mathcal{M}, j \neq j_r$ enables only the ECs. Clearly, for the OptimNT scheme, this holds for all locations. However, in the following formulation for the OptimT scheme, we force the storage-intensive functions to be executed only on the ECs, while the computing-intensive ones can also be hosted at the end terminals. During the mobility event, a user can move to different potential destinations $k \in \mathcal{K}$ (i.e., changing the anchoring point). Hereafter, and without loss of generality, we only accept adjacent access routers as available destinations in the mobility event. A series of metaverse regions is set on ECs to interact with users. The metaverse region serving the user can be found through the functions $A(f(r))$ and $A(k)$. As explained earlier, each metaverse region is pre-deployed on a server close to the mobile network and its distance to an EC is also predefined. In this paper, as already suggested, a set of AROs is assumed to be embedded across the different background metaverse region models and is defined as $\mathcal{N} = \{1, 2, \ldots, N\}$. A set $S_r = \{1, 2, \ldots, S\}$ is defined for the multiple renderings of the available metaverse region model for each user.
Thus, we denote with $p_{sj}$ the decision variable for pre-caching a metaverse region model $s \in S_r$ at the EC $j$ ($j \in \mathcal{M}, j \neq j_r$). The subset $L_{rs}$ represents the target AROs required by the user $r$ in the related model $s \in S_r$, and the size of each target ARO $l \in L_{rs}$ is denoted as $O_l$. Lastly, the decision variable $h_{rls}$ is introduced for proactively caching an ARO required by a request $r$. Based on the above, the decision variables $p_{sj}$ and $h_{rls}$ can be defined as follows,
$$p_{sj} = \begin{cases} 1, & \text{if rendering the related model } s \text{ at node } j, \\ 0, & \text{otherwise.} \end{cases} \quad (1)$$
$$h_{rls} = \begin{cases} 1, & \text{if ARO } l \text{ required by request } r \text{ embedded in the model } s \text{ is cached}, \\ 0, & \text{otherwise.} \end{cases} \quad (2)$$
Furthermore, the following additional set of constraints needs to be satisfied,
$$\sum_{r \in \mathcal{R}} h_{rls} \leq 1, \quad \forall j \in \mathcal{M}, j \neq j_r, \ \forall s \in S_r, \ \forall l \in L_{rs} \quad (3)$$
$$\sum_{s \in S_r} \sum_{l \in L_{rs}} h_{rls} \geq 1, \quad \forall r \in \mathcal{R} \quad (4)$$
$$\sum_{j \in \mathcal{M}, j \neq j_r} p_{sj} \geq h_{rls}, \quad \forall r \in \mathcal{R}, \ \forall s \in S_r, \ \forall l \in L_{rs} \quad (5)$$
$$h_{rls} \leq h_{rls} \sum_{j \in \mathcal{M}, j \neq j_r} p_{sj}, \quad \forall r \in \mathcal{R}, \ \forall s \in S_r, \ \forall l \in L_{rs} \quad (6)$$
Constraints (3) force each ARO to be pre-cached at most once in a related model. Constraints (4) ensure that a valid request consists of at least one model and an embedded ARO. Constraints (5) guarantee that the allocation of an ARO happens in conjunction with the decision to undertake proactive caching, whilst constraints (6) further certify that rejecting a model’s proactive caching causes any ARO planned to be embedded in this model to also not be pre-cached. Thus, (5) only accepts an ARO in a pre-cached model and (6) rejects all related AROs when a model fails to be cached; together, they ensure the model and its corresponding AROs cannot be handled separately during the formulation.
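As an illustration, the coupling between model rendering and ARO caching expressed by constraints (3)–(6) can be checked on toy binary assignments. This is a minimal sketch; the function and variable names below are hypothetical and not part of the formulation itself.

```python
# Sketch: checking the model/ARO caching constraints (3)-(6) on toy data.
# Variable names (p, h, requests, ...) are illustrative, not from the paper.

def caching_constraints_ok(p, h, requests, models, aros, nodes):
    """p[s][j]: model s rendered at node j; h[r][l][s]: ARO l for request r
    cached in model s. All values are 0/1."""
    # (3): each ARO in a model is pre-cached at most once across requests
    for s in models:
        for l in aros:
            if sum(h[r][l][s] for r in requests) > 1:
                return False
    # (4): a valid request needs at least one model with an embedded ARO
    for r in requests:
        if sum(h[r][l][s] for s in models for l in aros) < 1:
            return False
    # (5)/(6): an ARO may only be cached inside a model rendered somewhere;
    # if no node renders model s, all of its AROs must be dropped
    for r in requests:
        for s in models:
            rendered = sum(p[s][j] for j in nodes)
            for l in aros:
                if h[r][l][s] > rendered:
                    return False
    return True

p = {0: {0: 1, 1: 0}}        # model 0 rendered at node 0 only
h = {0: {0: {0: 1}}}         # request 0 caches ARO 0 inside model 0
print(caching_constraints_ok(p, h, [0], [0], [0], [0, 1]))
```

Dropping the single cached ARO would violate constraint (4), and the check would fail, mirroring how the MILP rejects a request with no model/ARO pair.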

3.2. Wireless Resource Allocation and Channel Model

With $B_j$, we express the bandwidth of the resource block, and $\gamma_{rj}$ denotes the signal-to-interference-plus-noise ratio (SINR) of the user $r$ at node $j$. With $P^{tran}_{rj}$, we denote the transmit power of user $r$ at node $j$, and $P_i$ is the transmission power at base station $i$. Furthermore, $H_{rj}$ is the channel gain, $N_j$ is the noise power and $a$ is the path-loss exponent, whilst $d_{rj}$ is the distance between the user and the base station. Finally, a nominal Rayleigh fading channel is used to capture the channel between the base stations and the users [24]. More specifically, the channel gain $H_{rj}$ can be written as follows [25],
$$H_{rj} = \frac{1}{\sqrt{2}} (t + t' J) \quad (7)$$
where $J^2 = -1$, and $t$ and $t'$ are random variables following the standard normal distribution. Based on the above, the SINR $\gamma_{rj}$ can be expressed as follows [25,26],
$$\gamma_{rj} = \frac{P^{tran}_{rj} |H_{rj}|^2 d_{rj}^{-a}}{N_j + \sum_{i \in \mathcal{M}, i \neq j} P_i |H_{ri}|^2 d_{ri}^{-a}} \quad (8)$$
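The channel and SINR model of Equations (7) and (8) can be sketched numerically as follows. All numeric values (powers, distances, noise, path-loss exponent) are illustrative assumptions, not parameters from the paper.

```python
# Sketch: sampling the Rayleigh channel gain of Equation (7) and computing
# the SINR of Equation (8). All numeric values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_gain():
    # H = (t + t'J)/sqrt(2) with t, t' ~ N(0, 1); |H|^2 is then exponential
    t, t2 = rng.standard_normal(2)
    return (t + 1j * t2) / np.sqrt(2)

def sinr(p_tran, d, j, noise, p_bs, a=3.5):
    """SINR of one user: node j is the serving base station, the others
    interfere. d[i]: distance to base station i; p_bs[i]: its power."""
    gains = np.array([abs(rayleigh_gain()) ** 2 for _ in d])
    signal = p_tran * gains[j] * d[j] ** (-a)
    interference = sum(p_bs[i] * gains[i] * d[i] ** (-a)
                       for i in range(len(d)) if i != j)
    return signal / (noise + interference)

print(sinr(p_tran=0.2, d=[50.0, 120.0, 200.0], j=0, noise=1e-13,
           p_bs=[1.0, 1.0, 1.0]))
```

Because the gains are random, the SINR is a random variable; the optimization in the paper works with the resulting rate selection rather than individual fading draws.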
The data rate is denoted as $g \in \mathcal{G}$ and the binary decision variable $e_{rg}$ decides whether to select the data rate $g$ for user $r$,
$$e_{rg} = \begin{cases} 1, & \text{if data rate } g \text{ is selected for user } r, \\ 0, & \text{otherwise.} \end{cases} \quad (9)$$
Noticing that the chosen data rate can also be written as $B_j \log_2(1 + \gamma_{rj})$, after choosing a data rate $g e_{rg}$ for the user, the transmit power $P^{tran}_{rj}$ can be written as follows,
$$P^{tran}_{rj} = \frac{N_j + \sum_{i \in \mathcal{M}, i \neq j} P_i |H_{ri}|^2 d_{ri}^{-a}}{|H_{rj}|^2 d_{rj}^{-a}} \left( 2^{\frac{g e_{rg}}{B_j}} - 1 \right) \quad (10)$$
Note that $2^{\frac{g e_{rg}}{B_j}} = (1 - e_{rg}) + e_{rg} 2^{\frac{g}{B_j}}$, since $e_{rg}$ is binary, and the following constraint should be satisfied to ensure that a single data rate is selected per user,
$$\sum_{g \in \mathcal{G}} e_{rg} = 1, \quad \forall r \in \mathcal{R} \quad (11)$$
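Equation (10) inverts the Shannon rate to obtain the required transmit power, and the binary-exponent identity makes the expression linear in $e_{rg}$. A minimal numerical sketch, with all parameter values assumed for illustration:

```python
# Sketch: inverting the Shannon rate to obtain the transmit power of
# Equation (10), using the identity 2^(g*e/B) = (1 - e) + e * 2^(g/B).
# All numeric values below are illustrative assumptions.

def transmit_power(g, e_rg, bandwidth, gain_sq, d, a, noise_plus_interf):
    """Power so that the achieved rate B*log2(1 + SINR) equals g*e_rg.
    gain_sq = |H_rj|^2; noise_plus_interf aggregates N_j and interference."""
    # exact for binary e_rg: the factor is 1 when the rate is not selected
    factor = (1 - e_rg) + e_rg * 2 ** (g / bandwidth)
    return noise_plus_interf / (gain_sq * d ** (-a)) * (factor - 1)

# When e_rg = 0, no power is spent for this rate; when e_rg = 1, the usual
# Shannon inversion applies.
print(transmit_power(g=2e6, e_rg=0, bandwidth=1e6, gain_sq=0.5, d=100.0,
                     a=3.5, noise_plus_interf=1e-12))
print(transmit_power(g=2e6, e_rg=1, bandwidth=1e6, gain_sq=0.5, d=100.0,
                     a=3.5, noise_plus_interf=1e-12))
```

The identity matters for the MILP: it turns the exponential of a decision variable into an affine expression, keeping the power term linear in $e_{rg}$.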

3.3. Latency, Energy Consumption and Quality of Perception Experience

Similar to our previous work in [12], the MAR service can be decomposed into computational-intensive and storage-intensive functionalities, which are denoted as $\eta$ and $\varrho$, respectively. For these functionalities, the corresponding execution locations are denoted as $x_{ri}$ and $y_{ri}$, respectively [12]. In a mobility event, the user’s moving probability from the starting location to an allowable destination can be learned by mobile operators from historical data and, hence, is defined as $u_{f(r)k} \in [0, 1]$ ($\{f(r), k\} \subset \mathcal{M}$). The size of foreground interactions is denoted as $F^{fore}_{\eta r}$, the size of the pointers used for matching AROs is denoted as $F_{\varrho r}$, and the size of the related model $s$ used for background-content checking is $F^{back}_{sr}$ [9,12]. During the matching and background-content validation process, the target AROs and/or the background content are possibly not pre-cached in the local cache; such a case is known as a “cache miss” (otherwise there is a “cache hit”). A cache miss in the local cache inevitably triggers the redirection of the request to the metaverse region stored in a core cloud deeper in the network, and this extra cost in latency is defined as the penalty $D$. After rendering, the model and target AROs are integrated into a compressed final frame for transmission, whose compressed size is denoted as $F^{res}_{sr}$.
In this section, a joint-optimization scheme is proposed which aims to balance the service delay and the energy consumption under the constraint of the user perception quality of the decomposed metaverse AR services in the EC-supported network. The cache hit/miss is expressed by the decision variable $z_{rj}$ and can be written as follows,
$$z_{rj} = \begin{cases} 1, & \text{if } \sum_{l \in L_{rs}} \sum_{s \in S_r} p_{sj} h_{rls} \geq |L_{rs}|, \\ 0, & \text{otherwise.} \end{cases} \quad (12)$$
The cache capacity of an EC and the cache hit/miss relation can be written as follows,
$$\sum_{r \in \mathcal{R}} \sum_{l \in L_{rs}} \sum_{s \in S_r} p_{sj} h_{rls} O_l \leq \Theta_j, \quad \forall j \in \mathcal{M}, j \neq j_r \quad (13)$$
$$\sum_{l \in \mathcal{N}} \sum_{s \in S_r} h_{rls} + \epsilon \leq |L_{rs}| + U (1 - q_{rj}), \quad \forall j \in \mathcal{M}, j \neq j_r, \ \forall r \in \mathcal{R} \quad (14)$$
where $\Theta_j$ denotes the available cache memory at node $j$. In (14), to transform the either-or constraint (i.e., $\sum_{l \in \mathcal{N}} \sum_{s \in S_r} h_{rls} < |L_{rs}|$ or $z_{rj} = 1$) into inequality equations, we bring in $\epsilon$ as a small tolerance value, $U$ as a large arbitrary number and $q_{rj}$ as a new decision variable satisfying $1 - q_{rj} = z_{rj}$ [12]. Undoubtedly, an increased level of proactive-caching decisions related to the background models and embedded AROs in a request inevitably brings an extra execution burden for the matching function. Taking the above into account, the actual processing delay of the computational-intensive function can be expressed as follows,
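The big-M construction in (14) can be sanity-checked numerically. The sketch below uses assumed values for $\epsilon$ and $U$ and a hypothetical helper name; it only illustrates how the indicator deactivates the bound.

```python
# Sketch: the big-M trick of constraint (14), linking the cache-hit
# indicator z (via q = 1 - z) to the number of cached AROs.
# eps and big_u are assumed illustrative values.

def cache_constraint_holds(n_cached, n_target, z, eps=0.5, big_u=10**6):
    """(14): n_cached + eps <= n_target + U * (1 - q), with q = 1 - z.
    z = 1 claims a cache hit and the big-M term deactivates the bound;
    z = 0 (miss) forces strictly fewer cached AROs than targets."""
    q = 1 - z
    return n_cached + eps <= n_target + big_u * (1 - q)

print(cache_constraint_holds(5, 5, z=1))   # hit: all 5 target AROs cached
print(cache_constraint_holds(5, 5, z=0))   # miss claimed, yet all cached
```

The second case is infeasible, which is exactly how the MILP prevents declaring a cache miss when every target ARO is in fact cached.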
$$V_{rj} = \frac{\omega_\eta F^{fore}_{\eta r}}{f_{Vj}} \quad (15)$$
Similarly, the processing delay of the matching and background-content-checking function, which is assumed to run only at servers, can be written as
$$W_{rj} = \frac{\omega_\varrho \left( F_{\varrho r} + \sum_{l \in L_{rs}} \sum_{s \in S_r} p_{sj} h_{rls} O_l + \sum_{s \in S_r} F^{back}_{sr} p_{sj} \right)}{f_{Vj}} \quad (16)$$
where $\omega_\eta$ and $\omega_\varrho$ (cycles/bit) represent the computation loads of foreground interaction and background matching, $f_{Vj}$ is the virtual CPU frequency (cycles/s), and $F_{\varrho r}$ is the size of the uploaded pointers of AROs in foreground interactions [9,12]. When the target AROs are found during matching, their pointers, included in foreground interactions, should also be transferred to the metaverse for updating. Finally, the final frame integrating the model and target AROs is transmitted back to the user. Hence, the overall transmission delay for each user after processing by the functions can be written as
$$\sum_{s \in S_r} \sum_{j \in \mathcal{M}, j \neq j_r} \left( C_{j A(f(r))} + C_{j A(k)} \right) p_{sj} + \left( C_{A(f(r)) f(r)} + \sum_{k \in \mathcal{K}} C_{A(k) k} u_{f(r)k} \right) \quad (17)$$
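The processing-delay expressions (15) and (16) can be illustrated with a short sketch. The workload numbers loosely follow the computation loads quoted in the introduction (4 and 10 cycles/bit); the CPU frequency and data sizes are assumptions.

```python
# Sketch of the processing delays in Equations (15) and (16). All numeric
# values are illustrative assumptions in the spirit of the paper's examples.

def foreground_delay(omega_eta, f_fore_bits, f_cpu):
    """V_rj: delay of the computational-intensive (foreground) function."""
    return omega_eta * f_fore_bits / f_cpu

def matching_delay(omega_rho, f_ptr_bits, cached_aro_bits, model_bits, f_cpu):
    """W_rj: matching plus background-content checking; grows with every
    pre-cached ARO and background model that must be scanned."""
    return omega_rho * (f_ptr_bits + cached_aro_bits + model_bits) / f_cpu

f_cpu = 3e9                                    # assumed 3 GHz virtual CPU
v = foreground_delay(4, 2e6, f_cpu)            # 2 Mb of foreground input
w = matching_delay(10, 1e4, 8e6, 80e6, f_cpu)  # ~10 MB background model
print(v, w)
```

The gap between the two delays reflects why the paper treats the background path as the heavy, storage-intensive part of the decomposed service.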
Note that the products of decision variables, $p_{sj} h_{rls}$, $p_{sj} y_{rj}$ and $p_{sj} h_{rls} y_{rj}$, create non-linearities. Observe that $p_{sj} h_{rls}$ and $p_{sj} y_{rj}$ appear directly, while the product $p_{sj} h_{rls} y_{rj}$ appears in $W_{rj} y_{rj}$, which represents the execution of the matching function at the location $j$ ($j \in \mathcal{M}, j \neq j_r$). To express the optimization problem in a nominal linear-programming setting, we linearize the above expressions via new auxiliary decision variables. To this end, a decision variable $\alpha_{rsj}$ is introduced as $\alpha_{rsj} = p_{sj} y_{rj}$ and the following constraints should be added,
$$\alpha_{rsj} \leq p_{sj}, \quad \alpha_{rsj} \leq y_{rj}, \quad \alpha_{rsj} \geq p_{sj} + y_{rj} - 1 \quad (18)$$
Similarly, a new decision variable $\beta_{rslj}$ is introduced as $\beta_{rslj} = p_{sj} h_{rls}$ and the following constraints should be added,
$$\beta_{rslj} \leq p_{sj}, \quad \beta_{rslj} \leq h_{rls}, \quad \beta_{rslj} \geq p_{sj} + h_{rls} - 1 \quad (19)$$
The constraints in (6) can then be rewritten as follows,
$$h_{rls} \leq \sum_{j \in \mathcal{M}} \beta_{rslj}, \quad \forall r \in \mathcal{R}, \ \forall s \in S_r, \ \forall l \in L_{rs} \quad (20)$$
In addition, it is worth pointing out that, for the binary decision variable $p_{sj}$, the following holds: $p_{sj} = p_{sj}^2$. Therefore, we have $p_{sj} h_{rls} y_{rj} = \alpha_{rsj} \beta_{rslj}$. Hence, a decision variable $\lambda_{rslj}$ is defined as $\lambda_{rslj} = \alpha_{rsj} \beta_{rslj}$ and the following set of constraints is added,
$$\lambda_{rslj} \leq \alpha_{rsj}, \quad \lambda_{rslj} \leq \beta_{rslj}, \quad \lambda_{rslj} \geq \alpha_{rsj} + \beta_{rslj} - 1 \quad (21)$$
Hence, the product $W_{rj} y_{rj}$ can be rewritten as follows,
$$\frac{\omega_\varrho \left( F_{\varrho r} y_{rj} + \sum_{l \in L_{rs}} \sum_{s \in S_r} \lambda_{rslj} O_l + \sum_{s \in S_r} F^{back}_{sr} \alpha_{rsj} \right)}{f_{Vj}} \quad (22)$$
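The three inequalities used in (18), (19) and (21) are the standard linearization of a product of binary variables; an exhaustive check confirms that the only feasible value of the auxiliary variable is exactly the product. A minimal sketch (the function name is illustrative):

```python
# Sketch: the standard linearization of a product of binary variables,
# as used in constraints (18), (19) and (21): w = u * v becomes three
# linear inequalities.
from itertools import product

def linearized_feasible(u, v, w):
    """w <= u, w <= v, w >= u + v - 1 (all variables binary)."""
    return w <= u and w <= v and w >= u + v - 1

# The only feasible w for each (u, v) pair is exactly the product u * v.
for u, v in product([0, 1], repeat=2):
    feasible = [w for w in (0, 1) if linearized_feasible(u, v, w)]
    print(u, v, feasible)   # single feasible value, equal to u * v
```

Chaining the trick twice (first $\alpha$ and $\beta$, then $\lambda$) is what allows the triple product $p_{sj} h_{rls} y_{rj}$ to enter the MILP linearly.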
By checking whether users share the same metaverse region through $A(f(t)) = A(f(r))$, $\{t, r\} \subset \mathcal{R}$, we can ensure that a user is also able to view other updates happening in the same metaverse region. Based on the previous modelling of the wireless channel, the wireless transmission delay in a mobility event can be written as follows,
$$\sum_{r \in \mathcal{R}} \frac{F^{fore}_{\eta r} + \sum_{t \in \mathcal{R}, A(f(t)) = A(f(r))} \sum_{s \in S_r} p_{sj} F^{res}_{st}}{g e_{rg}} + \sum_{r \in \mathcal{R}} \sum_{k \in \mathcal{K}} u_{f(r)k} \frac{F^{fore}_{\eta r} + \sum_{t \in \mathcal{R}, A(t) = A(k)} \sum_{s \in S_r} p_{sj} F^{res}_{st}}{g e_{rg}} \quad (23)$$
Noticing that, with constraint (11), the term $\frac{1}{e_{rg}}$ can be replaced with $e_{rg}$ for linearization by introducing a new decision variable $\phi_{rsg}$ with the following constraints,
$$\phi_{rsg} \leq e_{rg}, \quad \phi_{rsg} \leq p_{sj}, \quad \phi_{rsg} \geq e_{rg} + p_{sj} - 1 \quad (24)$$
the previous Formula (23) can be updated as follows,
$$\frac{1}{g} \sum_{r \in \mathcal{R}} \left( 1 + \sum_{k \in \mathcal{K}} u_{f(r)k} \right) \left( F^{fore}_{\eta r} e_{rg} + \sum_{t \in \mathcal{R}, A(f(t)) = A(f(r))} \sum_{s \in S_r} \phi_{rsg} F^{res}_{st} \right) \quad (25)$$
Based on the above derivations and in-line with [12], the overall latency can be written as follows,
$$L = (25) + \sum_{r \in \mathcal{R}} \sum_{i \in \mathcal{M}} \left( C_{f(r) i} + V_{ri} \right) x_{ri} + \sum_{r \in \mathcal{R}} \sum_{i \in \mathcal{M}} \sum_{j \in \mathcal{M}, j \neq j_r} \left( (22) + C_{ij} \xi_{rij} + C_{A(f(r)) f(r)} + \psi_{rj} D \right) + \sum_{s \in S_r} \sum_{j \in \mathcal{M}, j \neq j_r} \left( C_{j A(f(r))} + C_{j A(k)} \right) p_{sj} + \sum_{r \in \mathcal{R}} \sum_{k \in \mathcal{K}} \left( C_{A(k) k} + C_{k i} x_{ri} \right) u_{f(r)k} \quad (26)$$
where $V_{ri}$ is the processing delay of a computational-intensive function [12]. $L_{max}$, here, denotes the maximum allowed service latency, so that $\frac{L}{L_{max}} \in [0, 1]$.
The energy consumption of the system during each service time slot is measured by the product of its total power and running time. The server’s total power consists of the transmission power and the CPU processing power at target ECs. Denote the required CPU processing power of the user $r$ at the node $j$ as $P^{cpu}_{rj}$ and the CPU-chip architecture coefficient as $k_0$ (e.g., $10^{-18}$) [5]. Then, the power at the EC is given by $k_0 (f_{Vj})^2$ (J/s), based on measurements in [27,28]. Noticing that, for both conditions, the background contents are processed at servers, the consumed processing time of the server is
$$T_{cpu} = \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{M}, j \neq j_r} V_{rj} x_{rj} + \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{M}} W_{rj} y_{rj} \quad (27)$$
The wireless transmission happens regardless of whether the foreground interactions are executed at the terminals. Since the selected data rate is $g e_{rg}$, the wireless transmission time is
$$T_{tran} = \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{M}, j \neq j_r} \frac{F^{fore}_{\eta r}}{g e_{rg}} + \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{M}, j = j_r} \frac{F_{\varrho r}}{g e_{rg}} \quad (28)$$
Finally, the total consumed energy at the server side can be written as follows,
$$E_{server} = \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{M}} \left( P^{tran}_{rj} T_{tran} + P^{cpu}_{rj} T_{cpu} \right) = \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{M}} \frac{N_j + \sum_{i \in \mathcal{M}, i \neq j} P_i |H_{ri}|^2 d_{ri}^{-a}}{|H_{rj}|^2 d_{rj}^{-a}} \left( 2^{\frac{g e_{rg}}{B_j}} - 1 \right) \left( \mathbb{1}_{\{j \neq j_r\}} \frac{F^{fore}_{\eta r}}{g e_{rg}} + \mathbb{1}_{\{j = j_r\}} \frac{F_{\varrho r}}{g e_{rg}} \right) + \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{M}} k_0 (f_{Vj})^2 \left( W_{rj} y_{rj} + \mathbb{1}_{\{j \neq j_r\}} V_{rj} x_{rj} \right) \quad (29)$$
Regarding the terminal side, in this paper we follow Amdahl’s law to model the energy consumption, in which the potential speedup of parallel computation is considered [29]. In this paper, the metaverse AR functions execute serially and, for simplicity, the parallel portion is assumed to be zero, as in [17]. As mentioned earlier, the dynamic foreground interactions, including some highly used AROs, can be proactively cached at terminals together with the matching function. The 3D background model, on the other hand, is much larger and might serve multiple users in a region; hence, it is not recommended to store or process it at the terminal. In addition, the metaverse application should not exceed a certain portion of the whole terminal CPU resources, so that other functionalities can work properly [17]. Denoting the consumed portion as $\Gamma_r \in [30\%, 50\%]$ [17], the term $\frac{V_{rj} x_{rj}}{\Gamma_r}$ reflects that processing metaverse AR functions at the terminal requires a longer time. The energy consumption of terminals can be written as follows,
$$E_{terminal} = P_{terminal} T_{terminal} = \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{M}, j = j_r} k_0 (f_{Vj})^2 \frac{V_{rj} x_{rj}}{\Gamma_r} \quad (30)$$
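The terminal-side energy model of Equation (30) can be sketched as follows; the CPU-share cap $\Gamma_r$ stretches the effective processing time. The frequency, delay and $k_0$ values below are illustrative, loosely in the style of the paper's numerical assumptions.

```python
# Sketch of the terminal-side energy model of Equation (30): serial
# execution (zero parallel portion, as assumed in the text) under a
# CPU-share cap Gamma. k0 and the frequency are illustrative values.

def terminal_energy(f_cpu, v_delay, gamma, k0=1e-18):
    """E = k0 * f^2 * (V / Gamma): restricting the app to a share Gamma of
    the CPU stretches the processing time and hence the consumed energy."""
    if not 0 < gamma <= 1:
        raise ValueError("Gamma must be a CPU share in (0, 1]")
    return k0 * f_cpu ** 2 * (v_delay / gamma)

e_30 = terminal_energy(f_cpu=2e9, v_delay=0.004, gamma=0.3)
e_50 = terminal_energy(f_cpu=2e9, v_delay=0.004, gamma=0.5)
print(e_30, e_50)   # a smaller allowed CPU share costs more energy
```

This captures the trade-off the paper exploits: offloading to the terminal saves transmission delay but pays an energy premium governed by $\Gamma_r$.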
Finally, the overall system energy consumption is $E = E^{server} + E^{terminal}$, where $E_{max}$ represents the maximum possible energy consumption of the system, so that $\frac{E}{E_{max}} \in [0, 1]$.
SSIM is applied to measure the quality of the perception experience. In this paper, the video coding scheme (e.g., H.264) and frame resolution (e.g., 1280 × 720) are assumed to be pre-defined [10]. SSIM is then mainly affected by the data rate, and a concave function can capture the relation between them [10]. Hence, the set of SSIM values for each ARO under the corresponding data rates can be denoted as $\mathrm{SSIM}_l,\ l \in L_r$. The overall quality of the perception experience, Q, can be written as follows,
$$Q = \sum_{r \in R} \sum_{l \in L_r} \sum_{g \in G_c} \mathrm{SSIM}_l\, e^c_{rg}$$
To maintain the user experience above an acceptable level, the perception-quality constraint can be added, as follows,
$$Q \geq Q_{max} - Q_{bound}$$
where $Q_{max}$ is the maximum quality, obtained by selecting the maximum allowable data rate and storing as many AROs as possible.
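The quality constraint can be checked with a small table lookup. Only the endpoint rate/SSIM pairs (2 Mbps → 0.955, 3 Mbps → 0.968, 8 Mbps → 0.991) are quoted in the numerical section; the helper names below and the choice to omit the unlisted middle values are ours.

```python
# Toy check of the perception-quality constraint Q >= Q_max - Q_bound,
# using the rate/SSIM pairs quoted later in the paper.

RATE_TO_SSIM = {2: 0.955, 3: 0.968, 8: 0.991}  # Mbps -> SSIM (endpoints given)


def quality(selected_rates):
    """Q as the sum of SSIM values of the per-request selected rates."""
    return sum(RATE_TO_SSIM[g] for g in selected_rates)


def meets_quality(selected_rates, q_max, q_bound):
    """True when Q >= Q_max - Q_bound."""
    return quality(selected_rates) >= q_max - q_bound
```

With two requests, selecting the maximum 8 Mbps rate for both trivially satisfies the constraint, while dropping both to 2 Mbps violates it for a tight bound, which is the lever the optimizer trades against energy.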
Using a weighting parameter $\mu \in [0, 1]$, the bi-objective optimization problem can be written as follows,
$$\min\; \mu \frac{L}{L_{max}} + (1 - \mu) \frac{E}{E_{max}}$$
$$\mathrm{s.t.} \quad z_{rj} = 1 - q_{rj}, \quad \forall j \in M,\ r \in R$$
$$\sum_{r \in R} (x_{rj} + y_{rj}) \leq \Delta_j, \quad \forall j \in M,\ j \neq j_r$$
$$\sum_{j \in M} x_{rj} = 1, \quad \forall r \in R$$
$$\sum_{j \in M,\, j \neq j_r} y_{rj} = 1, \quad \forall r \in R$$
$$\xi_{rij} \leq x_{ri}, \quad \forall r \in R,\ i, j \in M,\ j \neq j_r$$
$$\xi_{rij} \leq y_{rj}, \quad \forall r \in R,\ i, j \in M,\ j \neq j_r$$
$$\xi_{rij} \geq x_{ri} + y_{rj} - 1, \quad \forall r \in R,\ i, j \in M,\ j \neq j_r$$
$$\psi_{rj} \leq z_{rj}, \quad \forall r \in R,\ j \in M,\ j \neq j_r$$
$$\psi_{rj} \leq y_{rj}, \quad \forall r \in R,\ j \in M,\ j \neq j_r$$
$$\psi_{rj} \geq z_{rj} + y_{rj} - 1, \quad \forall r \in R,\ j \in M,\ j \neq j_r$$
$$x_{rj}, y_{rj}, p_{sj}, h_{rls}, z_{rj}, q_{rj} \in \{0, 1\}, \quad \alpha_{rsj}, \beta_{rslj}, \lambda_{rslj}, \phi_{rslg}, \psi_{rj}, \xi_{rij} \in \{0, 1\},$$
$$\forall r \in R,\ j \in M,\ l \in L_{rs},\ s \in S_r,$$
together with constraints (3), (4), (5), (20), (11), (13), (14), (18), (19), (21), (24), (32).
As mentioned earlier, any assignment involving the storage-intensive functions ($y_{rj}$) is limited to ECs only and, hence, the constraint $j \neq j_r$ applies. The constraint (33b), together with constraints (3) to (20), expresses the interrelationship between the pre-caching decisions and the cache miss/hit for each request [12]. The constraint (33c) reflects the capacity limitation of the virtual machines, whilst constraints (33d) and (33e) guarantee that each function is executed exactly once at a single server [12]. Finally, the constraints (18) to (21) and (33f) to (33k) relate to the auxiliary variables used for linearization.
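The structure of the weighted bi-objective can be illustrated on a toy instance. The sketch below is a brute-force stand-in for the actual MILP (which is solved after the linearization above): it only keeps the one-EC-per-request constraint, and every cost number is invented for illustration.

```python
from itertools import product

# Brute-force illustration of  min  mu*L/L_max + (1-mu)*E/E_max  over the
# EC assignment x_rj of a handful of requests, each placed exactly once.


def solve(delay, energy, mu):
    """delay[r][j], energy[r][j]: per-request cost of serving r at EC j.

    Returns (objective, assignment) where assignment[r] is the chosen EC.
    """
    R, M = len(delay), len(delay[0])
    l_max = sum(max(row) for row in delay)      # worst-case total delay
    e_max = sum(max(row) for row in energy)     # worst-case total energy
    best = None
    for assign in product(range(M), repeat=R):  # enumerate all x_rj choices
        L = sum(delay[r][assign[r]] for r in range(R))
        E = sum(energy[r][assign[r]] for r in range(R))
        obj = mu * L / l_max + (1 - mu) * E / e_max
        if best is None or obj < best[0]:
            best = (obj, assign)
    return best
```

Sweeping `mu` from 0 to 1 on such an instance reproduces the qualitative behaviour discussed in the next section: delay-optimal and energy-optimal assignments generally differ, and intermediate weights trade one against the other.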

4. Numerical Investigations

In this section, the effectiveness of the proposed optimization scheme, referred to as Optim in the following, is investigated and compared with a number of nominal (baseline) mechanisms.
A nominal tree-like network topology, as shown in Figure 4, was applied, with 20 ECs in total, 6 ECs activated for the current metaverse AR service and 30 requests sent by MAR devices. The remaining resources allocated for metaverse AR support within an EC were assumed to be CPUs with frequencies of 4 to 8 GHz, a CPU-chip architecture coefficient of $10^{-18}$ (affected by the chip's design and structure) [5], 4 to 8 cores and [100, 400] MBytes of cache memory [12]. Similarly, the mobile AR devices were assumed to have a CPU with a 1 GHz frequency, 4 cores and [0, 100] MBytes of cache memory available for metaverse AR applications [17]. According to [15], the power of AR applications should not exceed 50% of the mobile device's total CPU power (2–3 W), so that other functionalities can operate efficiently. Hereafter, a nominal frame rate of 15 frames/second is assumed and rendering takes place every other frame (33.2 ms interval) [30,31,32]. Thus, the service delay of the aforementioned workflow within the above time interval can be regarded as acceptable. Each request requires a single free resource unit for each service function, such as, for example, a virtual machine (VM) [33]. Up to 14 available VMs are assumed in each EC, with equal splitting of the available CPU resources [12]. Note that different viewports lead to different models of the metaverse [9] and up to four different models can be cached. All target AROs must be integrated into the corresponding model and rendered within the frame before being streamed to the end user based on a matched result. After triggering the metaverse MAR service, the pointers identifying AROs, such as a name or index, are usually a few bytes [34] and, hence, their transmission and processing are neglected in the following simulations. The set of available data rates is {2, 3, …, 8} Mbps and the corresponding set of SSIM values is {0.955, 0.968, …, 0.991} [10].
We require the average SSIM to remain above 0.97 ($Q_{bound}$). For a nominal 5G base station, we assume a cell radius of 250 m, a carrier frequency of 2 GHz, a transmit power of 20 dBm, a noise power of $10^{-11}$ W, a path loss exponent of 4 and a maximum of 100 resource blocks; without loss of generality, each user can utilize only one resource block [35,36,37]. As mentioned earlier, we assume a predefined H.264 video coding scheme with a fixed frame resolution of 1280 × 720 [10] in RGB (8 bits per pixel). Based on the given resolution, the size of the foreground interactions after decoding and compressing can be calculated by multiplying by the coefficients $\frac{5}{9}$ and $10^{-3}$ [9]. Matlab on a personal PC with an Intel i7-6500U CPU (2 cores) was employed for the simulation. Key simulation parameters are shown below, in Table 1.
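A quick sanity check of the stated radio parameters can be done with a back-of-the-envelope link budget. The helper names are ours, and the 180 kHz resource-block bandwidth is our assumption (the 15 kHz × 12-subcarrier LTE/NR grid); everything else (20 dBm transmit power, $d^{-4}$ path loss, $10^{-11}$ W noise) is taken from the setup above.

```python
import math

# Back-of-the-envelope SNR and Shannon rate for the stated cell parameters.


def snr(tx_dbm: float, dist_m: float, alpha: float, noise_w: float) -> float:
    """Linear SNR with power-law path loss d^-alpha and no interference."""
    tx_w = 10 ** (tx_dbm / 10) / 1000  # dBm -> W
    return tx_w * dist_m ** (-alpha) / noise_w


def shannon_rate_bps(snr_lin: float, bw_hz: float = 180e3) -> float:
    """Shannon capacity of one (assumed 180 kHz) resource block."""
    return bw_hz * math.log2(1 + snr_lin)
```

At the 250 m cell edge this gives an SNR of roughly 2.6, i.e. under 0.4 Mbps per assumed resource block, which is consistent with the discrete {2, …, 8} Mbps rates requiring good channel conditions or more spectrum.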
In the following figures and discussion, the optimized scheme that considers terminals is denoted as OptimT, while the one that does not include the terminals is denoted as OptimNT. The OptimNT scheme can be regarded as a natural extension of our previous work [16]. Two other baseline schemes sharing the same caching decisions as the proposed Optim scheme were also implemented for comparison: the random selection scheme (RandS) and the closest-EC-first scheme (CEC) [38]. The RandS scheme selects an EC at random, while the CEC scheme selects the EC closest to the user's initial location and also accepts the second-closest EC as a backup choice [38].
According to Figure 5, the service delay for each request of the proposed schemes decreases, as expected, with an increasing weight μ. With a larger weight, the proposed schemes tend to select a larger data rate and direct the service to more powerful ECs, which naturally leads to a smaller overall delay. Compared to the OptimNT scheme, for example, the gain in delay of the OptimT scheme ranges from 1.9% to 10.4%. When seeking the best energy efficiency (μ = 0), since the CPU resources at the terminals are also shared by other functionalities, the OptimT scheme also tries to avoid occupying the terminals. Hence, in this case, the two schemes share similar solutions and approaches. Since, at this weight, the proposed schemes place no weight on the latency cost, they may choose a distant and busy EC, which can even cause the OptimNT scheme to perform worse than the CEC scheme. As the weight μ increases and the emphasis shifts from energy to latency, the OptimT scheme allocates some foreground interactions to terminals and outperforms OptimNT. Since the baseline schemes neglect energy consumption, service decomposition and mobility, their gaps in relation to the proposed scheme grow with increasing weight. However, such a gain in delay comes with an extra cost in energy consumption. As shown in Figure 6, the energy consumption per request of the proposed scheme increases with a larger weight. Compared to the OptimNT scheme, the OptimT scheme consumes 2.9% to 23.0% more energy under different weights. Thus, it might not always be worth a large increase in energy consumption for narrow gains in delay. By selecting a suitable weight, the OptimT scheme can strike a balance between delay and energy consumption. Through the utilization of terminals, the OptimT scheme becomes the most sensitive to energy consumption and there can be instances in which it consumes more energy than the RandS scheme.
Clearly, the user experience could be elevated by viewing more AROs in foreground interactions or more detailed scenes from background models. Figure 7 shows the variation in delay with increasing foreground-interaction size. When the average foreground-interaction size is not too large and there are still enough resources at the target ECs, the OptimNT and baseline schemes increase almost linearly at a similar rate. As the size keeps increasing and resources become limited, the CEC scheme becomes the most sensitive, because it only targets the few closest ECs and is more prone to triggering the penalty. The OptimT scheme, on the other hand, maintains the lowest latency and the flattest growth. It can save up to 13.8% and 51.7% in delay compared, respectively, to the OptimNT scheme and the CEC scheme. Figure 8 further shows the corresponding variation in energy. The baseline schemes process foreground interactions at ECs without considering energy. Hence, their decisions are not noticeably affected by the size of the foreground interactions and their energy consumption increases almost linearly. The OptimNT scheme keeps finding more suitable ECs according to the current foreground-interaction size and the remaining resources, while the OptimT scheme further enables terminals to process some tasks. Compared to the CEC scheme, they can save over 14.3% in energy. It should be pointed out that the energy consumption is not taken into account when a request is redirected to the more distant core cloud and the penalty is triggered. According to Figure 9, the background model size is much larger and can cause a significant increase in delay. Note that, in the OptimT scheme, the terminals can take charge of some foreground interactions and, hence, make room for background models. It remains the best scheme in terms of delay, with up to 9.8% less than the OptimNT scheme.
As mentioned earlier, the proactive caching and processing of background models only happen at ECs; these schemes share a similar level of increased energy consumption until triggering the overloading penalty.
The number of available VMs activated in an EC is known as the EC capacity. For a given EC capacity (e.g., 14), the ratio between the number of requests (e.g., between 30 and 40) and the EC capacity can be used to represent the average EC utilization in the network. This rate was then normalized into [0, 1] for better presentation. As shown in Figure 10 and Figure 11, an increasing EC utilization rate indicates a more congested network and, hence, as expected, the delay and energy consumption increase as well. Compared to the OptimNT scheme, the OptimT scheme is still more sensitive in terms of energy but better in terms of delay. Thus, its consideration of terminals benefits delay at the cost of energy. Observe from Table 2 that, even when there is no mobility event, the proposed OptimT scheme is still slightly better than the other baseline schemes, because its flexible use of terminals can better avoid potential EC overloading. Therefore, the proposed OptimT and OptimNT schemes have an obvious advantage over the baseline schemes and are recommended in a congested network and a high-user-physical-mobility scenario. In particular, when the MAR terminal still has enough energy capacity, its computing resources should not be neglected and, hence, the OptimT scheme is more suitable in this case.

5. Conclusions

Extending MAR applications into the metaverse is expected to incorporate the rendering and updating of high-quality AR metadata in order to provide a more realistic experience. Hence, such forward-looking applications are highly delay- and energy-sensitive and are significantly demanding in terms of caching and computing resources. In this paper, a joint optimization scheme is proposed by explicitly considering the model rendering performance, user mobility and service decomposition to achieve a balance between energy consumption and service delay under the constraint of user perception quality for metaverse MAR applications. Recent technical improvements in AR devices allow them to process more tasks locally. To this end, we explore their potential in the metaverse and compare the performance with terminal-oblivious schemes which rely on cloud support. A wide range of numerical investigations reveals that the proposed terminal-aware framework provides improved decision making compared to baseline schemes for energy consumption and resource allocation for metaverse MAR applications, especially under congested-network and high-mobility scenarios.

Author Contributions

Z.H., writing—original draft, writing—review and editing; V.F., writing—review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Necessary data are included in this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ball, M. The Metaverse: And How It Will Revolutionize Everything; Liveright Publishing: New York, NY, USA, 2022. [Google Scholar]
  2. Rauschnabel, P.A.; Felix, R.; Hinsch, C. Augmented reality marketing: How mobile AR-apps can improve brands through inspiration. J. Retail. Consum. Serv. 2019, 49, 43–53. [Google Scholar] [CrossRef]
  3. Rauschnabel, P.A.; Felix, R.; Hinsch, C.; Shahab, H.; Alt, F. What is XR? Towards a framework for augmented and virtual reality. Comput. Hum. Behav. 2022, 133, 107289. [Google Scholar] [CrossRef]
  4. Xu, M.; Niyato, D.; Kang, J.; Xiong, Z.; Miao, C.; Kim, D.I. Wireless Edge-Empowered Metaverse: A Learning-Based Incentive Mechanism for Virtual Reality. arXiv 2021, arXiv:2111.03776. [Google Scholar]
  5. Dong, R.; She, C.; Hardjawana, W.; Li, Y.; Vucetic, B. Deep learning for hybrid 5G services in mobile edge computing systems: Learn from a digital twin. IEEE Trans. Wirel. Commun. 2019, 18, 4692–4707. [Google Scholar] [CrossRef] [Green Version]
  6. Xu, M.; Ng, W.C.; Lim, W.Y.B.; Kang, J.; Xiong, Z.; Niyato, D.; Yang, Q.; Shen, X.S.; Miao, C. A Full Dive into Realizing the Edge-enabled Metaverse: Visions, Enabling Technologies, and Challenges. arXiv 2022, arXiv:2203.05471. [Google Scholar] [CrossRef]
  7. Chylinski, M.; Heller, J.; Hilken, T.; Keeling, D.I.; Mahr, D.; de Ruyter, K. Augmented reality marketing: A technology-enabled approach to situated customer experience. Australas. Mark. J. (AMJ) 2020, 28, 374–384. [Google Scholar] [CrossRef]
  8. Li, L.; Qiao, X.; Lu, Q.; Ren, P.; Lin, R. Rendering Optimization for Mobile Web 3D Based on Animation Data Separation and On-Demand Loading. IEEE Access 2020, 8, 88474–88486. [Google Scholar] [CrossRef]
  9. Guo, F.; Yu, F.R.; Zhang, H.; Ji, H.; Leung, V.C.; Li, X. An adaptive wireless virtual reality framework in future wireless networks: A distributed learning approach. IEEE Trans. Veh. Technol. 2020, 69, 8514–8528. [Google Scholar] [CrossRef]
  10. Kato, H.; Kobayashi, T.; Sugano, M.; Naito, S. Split Rendering of the Transparent Channel for Cloud AR. In Proceedings of the 2021 IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP), Tampere, Finland, 6–8 October 2021; pp. 1–6. [Google Scholar]
  11. Yang, X.; Chen, Z.; Li, K.; Sun, Y.; Liu, N.; Xie, W.; Zhao, Y. Communication-constrained mobile edge computing systems for wireless virtual reality: Scheduling and tradeoff. IEEE Access 2018, 6, 16665–16677. [Google Scholar] [CrossRef]
  12. Huang, Z.; Friderikos, V. Proactive edge cloud optimization for mobile augmented reality applications. In Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April 2021; pp. 1–6. [Google Scholar]
  13. Hertzmann, A.; Perlin, K. Painterly rendering for video and interaction. In Proceedings of the 1st international Symposium on Non-Photorealistic Animation and Rendering, Annecy, France, 5–7 June 2000; pp. 7–12. [Google Scholar]
  14. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  15. Chen, H.; Dai, Y.; Meng, H.; Chen, Y.; Li, T. Understanding the characteristics of mobile augmented reality applications. In Proceedings of the 2018 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), Belfast, UK, 2–4 April 2018; pp. 128–138. [Google Scholar]
  16. Huang, Z.; Friderikos, V. Mobility Aware Optimization in the Metaverse. In Proceedings of the 2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 1–6. [Google Scholar]
  17. Song, S.; Kim, J.; Chung, J.M. Energy consumption minimization control for augmented reality applications based on multi-core smart devices. In Proceedings of the 2019 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 11–13 January 2019; pp. 1–4. [Google Scholar]
  18. Chen, K.; Li, T.; Kim, H.S.; Culler, D.E.; Katz, R.H. Marvel: Enabling mobile augmented reality with low energy and low latency. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, Shenzhen, China, 4–7 November 2018; pp. 292–304. [Google Scholar]
  19. Seo, Y.J.; Lee, J.; Hwang, J.; Niyato, D.; Park, H.S.; Choi, J.K. A novel joint mobile cache and power management scheme for energy-efficient mobile augmented reality service in mobile edge computing. IEEE Wirel. Commun. Lett. 2021, 10, 1061–1065. [Google Scholar] [CrossRef]
  20. Han, Y.; Niyato, D.; Leung, C.; Miao, C.; Kim, D.I. A dynamic resource allocation framework for synchronizing metaverse with iot service and data. arXiv 2021, arXiv:2111.00431. [Google Scholar]
  21. Jiang, Y.; Kang, J.; Niyato, D.; Ge, X.; Xiong, Z.; Miao, C. Reliable coded distributed computing for metaverse services: Coalition formation and incentive mechanism design. arXiv 2021, arXiv:2111.10548. [Google Scholar]
  22. Chu, N.H.; Hoang, D.T.; Nguyen, D.N.; Phan, K.T.; Dutkiewicz, E. MetaSlicing: A Novel Resource Allocation Framework for Metaverse. arXiv 2022, arXiv:2205.11087. [Google Scholar]
  23. Ng, W.C.; Lim, W.Y.B.; Ng, J.S.; Xiong, Z.; Niyato, D.; Miao, C. Unified resource allocation framework for the edge intelligence-enabled metaverse. arXiv 2021, arXiv:2110.14325. [Google Scholar]
  24. Gemici, Ö.F.; Hökelek, İ.; Çırpan, H.A. Modeling Queuing Delay of 5G NR With NOMA Under SINR Outage Constraint. IEEE Trans. Veh. Technol. 2021, 70, 2389–2403. [Google Scholar] [CrossRef]
  25. Cho, Y.S.; Kim, J.; Yang, W.Y.; Kang, C.G. MIMO-OFDM Wireless Communications with MATLAB; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  26. Wang, Y.; Haenggi, M.; Tan, Z. The meta distribution of the SIR for cellular networks with power control. IEEE Trans. Commun. 2017, 66, 1745–1757. [Google Scholar] [CrossRef]
  27. Zhang, W.; Wen, Y.; Guan, K.; Kilper, D.; Luo, H.; Wu, D.O. Energy-optimal mobile cloud computing under stochastic wireless channel. IEEE Trans. Wirel. Commun. 2013, 12, 4569–4581. [Google Scholar] [CrossRef]
  28. Miettinen, A.P.; Nurminen, J.K. Energy efficiency of mobile clients in cloud computing. In Proceedings of the 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 10), Boston, MA, USA, 22 June 2010. [Google Scholar]
  29. Amdahl, G.M. Validity of the single processor approach to achieving large scale computing capabilities. In Proceedings of the Spring Joint Computer Conference, Atlantic City, NJ, USA, 18–20 April 1967; pp. 483–485. [Google Scholar]
  30. Cozzolino, V.; Tonetto, L.; Mohan, N.; Ding, A.Y.; Ott, J. Nimbus: Towards Latency-Energy Efficient Task Offloading for AR Services. IEEE Trans. Cloud Comput. 2022. [Google Scholar] [CrossRef]
  31. Niu, G.; Chen, Q. Learning an video frame-based face detection system for security fields. J. Vis. Commun. Image Represent. 2018, 55, 457–463. [Google Scholar] [CrossRef]
  32. Naman, A.T.; Xu, R.; Taubman, D. Inter-frame prediction using motion hints. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013; pp. 1792–1796. [Google Scholar]
  33. Liu, Q.; Huang, S.; Opadere, J.; Han, T. An edge network orchestrator for mobile augmented reality. In Proceedings of the IEEE INFOCOM, Honolulu, HI, USA, 16–19 April 2018. [Google Scholar]
  34. Zhang, A.; Jacobs, J.; Sra, M.; Höllerer, T. Multi-View AR Streams for Interactive 3D Remote Teaching. In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology, Osaka, Japan, 8–10 December 2021; pp. 1–3. [Google Scholar]
  35. Korrai, P.K.; Lagunas, E.; Sharma, S.K.; Chatzinotas, S.; Ottersten, B. Slicing based resource allocation for multiplexing of eMBB and URLLC services in 5G wireless networks. In Proceedings of the 2019 IEEE 24th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), Limassol, Cyprus, 11–13 September 2019; pp. 1–5. [Google Scholar]
  36. Li, S.; Lin, P.; Song, J.; Song, Q. Computing-Assisted Task Offloading and Resource Allocation for Wireless VR Systems. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 368–372. [Google Scholar]
  37. Chettri, L.; Bera, R. A comprehensive survey on Internet of Things (IoT) toward 5G wireless systems. IEEE Internet Things J. 2019, 7, 16–32. [Google Scholar] [CrossRef]
  38. Toczé, K.; Nadjm-Tehrani, S. ORCH: Distributed orchestration framework using mobile edge devices. In Proceedings of the 2019 IEEE 3rd International Conference on Fog and Edge Computing (ICFEC), Larnaca, Cyprus, 14–17 May 2019; pp. 1–10. [Google Scholar]
Figure 1. The general work flow of metaverse AR applications with delay in each stage (6 EC, 30 requests, weight μ is 1, EC Capacity is 14 and total mobility probability is 1).
Figure 2. Illustrative toy examples of caching by a metaverse MAR application. (a) Traditional caching without user mobility and service decomposition. (b) Proactive caching with user mobility and service decomposition.
Figure 3. Illustrative toy examples of different activated nodes in metaverse.
Figure 4. A typical tree-like designed network topology.
Figure 5. Overall delay with weight μ (6 EC, 30 requests, EC capacity is 14 and total mobility probability is 1).
Figure 6. Average energy consumption with weight μ .
Figure 7. Overall delay with foreground interaction size (6 EC, 30 requests, μ = 0.5 , EC capacity is 14 and total mobility probability is 1).
Figure 8. Average energy consumption with foreground interaction size (6 EC, 30 requests, μ = 0.5 , EC capacity is 14 and total mobility probability is 1).
Figure 9. Overall delay with background model size (6 EC, 30 requests, μ = 0.5 , EC capacity is 14 and total mobility probability is 1).
Figure 10. Overall delay with average EC utilization rate (6 EC, μ = 0.5 and total mobility probability is 1).
Figure 11. Energy consumption with average EC utilization rate (6 EC, μ = 0.5 and total mobility probability is 1).
Table 1. Simulation parameters.
Parameter: Value
Number of available ECs: 6
Number of available VMs per EC (EC capacity): 14
Number of requests: 30
Number of available models per user: 4
AR object size: (0, 10] MByte
Total moving probability: [0, 1]
Cell radius: 250 m
Remaining cache capacity per EC: [100, 400] MByte
EC CPU frequency: [4, 8] GHz
EC CPU cores: [4, 8]
EC CPU core portion per VM: 0.25–0.5
Remaining cache capacity per terminal: [0, 100] MByte
Terminal CPU frequency: 1 GHz
Terminal CPU cores: 4
CPU architecture coefficient: $10^{-18}$
Foreground-interaction computational load: 4 cycles/bit
Background-content-checking computational load: 10 cycles/bit
Carrier frequency: 2 GHz
Transmission power: 20 dBm
Path loss exponent: 4
Noise power: $10^{-11}$ W
Number of resource blocks: 100
Frame resolution: 1280 × 720
Average latency per hop: 2 ms
Cache miss penalty: 25 ms
Table 2. Overall delay in no-mobility event ( μ = 1 , 6 ECs, 30 requests and EC capacity is 14).
Scheme      OptimT   OptimNT   CEC    RandS
Delay (ms)  38.8     40.1      40.7   60.8