
Computer Networks 145 (2018) 76–88


User incentive model and its optimization scheme in user-participatory fog computing environment

Won-Suk Kim, Sang-Hwa Chung∗

Pusan National University, 6510, Engineering Bldg #6, Busan 609-735, Geumjeong-gu, Republic of Korea

Article history: Received 12 September 2017; Revised 7 August 2018; Accepted 22 August 2018; Available online 24 August 2018

Keywords: Internet of things; Fog computing; Software defined networking; Fog container placement; User incentive

Abstract: Although fog computing is recognized as an alternative computing model to cloud computing for the IoT, it is not yet widely used. The replacement of network equipment is inevitable to implement fog computing; however, the entity in charge of this costly replacement is unclear, as is the entity in charge of operating the infrastructure. To solve these feasibility problems, we propose an incentive-based, user-participatory fog computing architecture. In terms of inducing user participation in the proposed architecture, users are first classified into four categories according to their tendencies and conditions, and the types of incentives paid as compensation for participation, the payment standard, and the operation model are presented in detail. From the perspective of fog service instance deployment, the instances should be deployed to reasonably minimize the incentives paid to the participating users, which is directly linked to maximizing the profitability of the infrastructure operator, while maintaining performance. The optimization problem for instance placement that achieves this design goal is formulated as a mixed-integer nonlinear program and then linearized. The proposed instance placement scheme is compared with several schemes through simulations based on actual service workloads and device power consumption.
© 2018 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license.

∗ Corresponding author. E-mail address: (S.-H. Chung).

1. Introduction

Fog computing is a new computing model proposed to solve various problems, such as high latency, the concentration of computing and traffic load, and the lack of location awareness, which may occur when Internet of Things (IoT) services operate in a cloud data center [1]. This concept extends existing cloud services, such as computing, storage, and networks, to the network edge near the users or devices. This implies that some of the roles of cloud data centers are moved to a number of geographically distributed physical network devices, such as routers, switches, Wi-Fi access points (APs), and IoT gateways (GWs). Therefore, the application of fog computing can naturally reduce the response time of network services, support real-time services, and enable networking that considers location information while realizing all the advantages of cloud computing [2–5].

The main use cases of fog computing consist of Smart City, Smart Building, Autonomous Driving, and the rendering delegation service [6,7]. In Smart City, Wi-Fi APs, IoT GWs, and mobile telecommunication base stations are used as fog devices, which are physical devices that operate fog server instances, to provide IoT services, such as smart traffic systems or disaster detection services, and network access simultaneously [8]. In Autonomous Driving, it is possible to utilize the location awareness feature of fog computing [9]. The advanced driver assistance system of an autonomous vehicle can efficiently utilize traffic information and local features, as well as events on roads at nearby locations, such as construction or accidents, which are rapidly received from a nearby fog device [10]. In the rendering delegation service, when showing information through augmented reality, operations such as augmented reality output data analysis and video processing, which require substantial computing resources and utilize local information, can be processed using the computing resources of adjacent fog devices [11]. This is possible because fog computing has a significantly lower latency than cloud computing.

A fog device is primarily a network device and is typically located between network endpoints, such as the devices and users, and a core network. Unfortunately, if an edge router of the core network becomes a fog device, the overall network performance can be degraded. The edge router provides existing network functions under heavy load, and cannot reflect the local characteristics adequately. Therefore, it is ideal that switches, hubs, Wi-Fi APs, IoT GWs, and base stations in the local network are general-purpose fog devices. That is, referring to [12], the fog device in this paper can be a piece of "smart" local network equipment enriched with IT capacities [13–15]. Existing devices, such as sensor nodes, IoT GWs, and smartphones, which are located at the very edge of the network, are not suitable for consideration in this study because they may be positioned to provide dedicated services only.

Abbreviations
PI: a participation incentive for users
SI: a sharing incentive for users
NU: a non-participating user
EPU: a participating user using excessive resources
PPU: a private participating user
SPU: a sharing participating user
MPC: a minimizing power consumption scheme
MPU: a minimizing power consumption considering owner usage scheme
MSI: a minimizing sharing incentive scheme

A general-purpose physical server, not a local network device, can also be a fog device, but the server must be connected to a specific local network device for the location awareness feature. In terms of service availability, the server should also prevent unexpected events such as power-down. Installing this additional high-availability server in close proximity to a specific local network device is similar to replacing the network device. In other words, attaching an additional server to an existing local network device and replacing the network device can be regarded as the same operation.

As with the cloud data center, hardware virtualization is required to provide dynamic resource provisioning to support flexibility and scalability utilizing the fog devices. There are two main types of hardware virtualization, a hypervisor-based virtual machine (VM) and a container. The main difference between these two virtualization techniques is that the VM includes the guest OS and the container does not. That is, the VM provides independent hardware through full hardware virtualization, while the container only supports resource isolation by sharing the kernel. Although each technique has advantages and disadvantages, the container, represented by Docker, is very easy to deploy and requires fewer resources than the VM [16]. Therefore, in this study, the fog service instance operates based on the container.

1.1. Motivations and challenges

Although fog computing has been recognized as a suitable computing model for the IoT era in many studies, it has not yet been widely used in real industry [5,10,13]. The use cases are not lacking, but the feasibility is a significant challenge because of its nature. In other words, there may exist questions such as who will construct this costly infrastructure, who will manage the resources of fog devices, and who will create revenue with fog computing.

The factors that impede the feasibility of fog computing are the realization agent and capital expenditure (CAPEX). The roles of existing cloud servers, including data processing and storage, must be performed in network devices, which inevitably necessitates the replacement of the firmware and hardware of the network equipment. This is equivalent to an infrastructure company spending a huge CAPEX on a business that may not be profitable [17]. The network service developer should develop a fog container separately to utilize fog computing; this is a server instance of a network service in the form of a Docker container operating on a fog device. The developer must consider all aspects of the container development, including the connection to the cloud, the container hierarchy, and the characteristics of the location for deployment. This developmental difficulty becomes a barrier to the service operator, making it more difficult to determine whether to initiate the fog service. Accordingly, the operator considers launching the fog service only after the appropriate fog computing infrastructure has been established in at least the target area, such as a city, a country, or the world.

The key challenge to the feasibility of fog computing is the main agent of realization. Since fog computing is based on the use of edge network devices, internet service providers (ISPs), mobile carriers, and switch vendors can be the realizers, individually or collaboratively. In terms of extending the capabilities of the cloud data center to the network edge, the cloud operator can be the realization entity. Moreover, in terms of expanding the functionality of the IoT service itself, the IoT service provider can install and operate the fog devices directly. However, proper infrastructure is required for fog computing to operate in the aforementioned correct manner, or as a general-purpose tool. In other words, it is not reasonable for an IoT service provider to install dedicated fog devices directly in target areas to provide its fog computing service to users, or to install a fog device in an area where the network infrastructure operator is not profitable.

The challenging issues in the feasibility of fog computing can be summarized as follows.

(1) The main agent of fog computing should be determined independently of existing interests; that is, an independent realization and intermediation agent is required.
(2) The cost of purchasing, installing, and maintaining devices for building a fog computing infrastructure should be convincingly shared.
(3) The participants in the fog computing infrastructure construction should be provided with appropriate compensation by those who benefit from fog computing.
(4) Fog containers should be deployed in consideration of the resource requirements of fog services, service usage by the users, and the profitability of the intermediation agent.

1.2. Contributions

We propose an incentive management method for the fog computing architecture based on an independent management entity called a fog portal to address the aforementioned challenges. The proposed system is an incentive-based, user-participatory architecture that solves the problem of fog computing infrastructure construction and expansion through users, such as general users, public agency employees, and network administrators.

The difference between the existing fog computing model, which was proposed by Cisco, and the proposed model lies in the construction method, not the operating method [1]. In the existing model, one of the aforementioned operators constructs and manages the infrastructure directly, and enters into a lease contract with the fog service operator. On the other hand, in the proposed model, the participating user directly constructs the infrastructure, the fog portal manages the user devices, and the service operator enters into a contract with the fog portal. As mentioned above, the construction process of the proposed model is progressive compared to the existing model, and the construction cost is effectively shared. In addition, the participating users can receive a reasonable incentive from multiple network-related operators and other users, thereby offsetting the cost of their own fog device. In terms of operation, in the proposed architecture, the containers are placed to maximize the incentive fee income of the fog portal business, utilizing the power consumption based on the user contribution and the user usage.

The user purchases the fog device directly, connects it to the local network, and registers it in the fog portal. The participating user delegates the operation authority of his/her device to the fog portal, and declares the desired fog service, which is the network service based on fog computing. At the same time, in return for providing infrastructure, the user receives incentives such as communication charge reductions from ISPs or revenue sharing from the fog portal.

The fog portal can provide detailed service-specific usage through collaboration with the software-defined networking (SDN) controller in the local network; this is provided to the network service operator in the form of a local usage distribution. The decision to develop and distribute the fog container is entirely up to the operator. When the operator decides to initiate a fog service in a particular area, the fog manager of the local network covering the area determines the fog container placement location based on the resource usage and other policies.

The fog portal collects fees from non-participating users to provide sharing incentives to participating users who share resources. It is assumed that the participating users primarily aim to share the resources of their devices with the non-participating or other participating users, and to receive the sharing incentives from the fog portal. In this situation, the container placement process by the fog manager is performed to reasonably minimize the sharing incentives for the participating users. Minimizing the sharing incentives is directly linked to maximizing the revenue of the fog portal, which is achieved by minimizing the power consumption of the workload while maintaining network usage at a level that does not cause congestion. This optimization model is able to accurately reflect the actual business model of the incentive-based, user-participatory fog computing.

The contribution of the proposed model to the feasibility of fog computing can be summarized as follows.

(1) The incentive-based, user-participatory fog computing based on the independent fog computing manager called the fog portal can contribute to the feasibility of fog computing.
(2) Since the establishment of fog infrastructure should be carried out by replacing the network equipment over a wide area, the validity of user participation is suggested.
(3) User participation in fog infrastructure construction can be accelerated by using reasonable user incentive payment criteria, such as the level of computing resource sharing.
(4) In terms of fog container placement, we propose a container placement optimization scheme that maximizes the profitability of the fog portal while taking into account the major resources of fog computing, such as network bandwidth and workload.

As a result, this proposed model, which can improve the feasibility of fog computing, will ultimately be a rational business model for fog computing.

2. Related works

As fog computing regards a network switch as a fog device, some objectives of fog computing cannot be achieved by considering only the networking factors of the switch. In terms of fog container placement, the following are important considerations: the workload required by the containers placed in the fog devices, the available workload of the fog devices, the required bandwidth of the network services, the number of hops between the containers and users, and the communication patterns of the services. Among these, the workload refers to the amount of work given to the CPU, memory, disk, I/O bus, and motherboard to perform computationally intensive processes, such as image encoding/decoding, decompression, big data analysis, and real-time video processing.

Fig. 1. Power consumption of CPU, memory, network, and disk for various computing processes [19].

Unfortunately, common workload units do not yet exist, and the workload measurement method varies considerably with the hardware type. For example, considerations for measuring the memory utilization of a particular process include the memory type, architecture, number of reads and writes, cache miss rate, and even the memory block ambient temperature. In the case of the CPU, it is almost impossible to accurately measure the workload owing to the complexities of multicore and virtualization. For these reasons, there is no comprehensive unit with which to represent the computational workload, such as frames per second, and the power consumption of each hardware component is widely used as an alternative to the workload. The modeling of the power consumption of each hardware component is beyond the scope of this study; for the relationship between workload and power consumption, we refer to other studies [18]. That study presents a statistical model that provides runtime predictions of the power consumption of server blades. The model considers key thermal and system parameters, such as ambient temperature, board temperature, and hardware performance counters, as metrics for system energy consumption within a given power and thermal envelope.

In addition, studies have been conducted on component-level power consumption, power dissipation, and fluctuations per server blade workload [19]. Fig. 1 shows the power consumption of the CPU, memory, networking, and disk for various computing processes. Intuitively, the power consumption of the network or storage does not differ significantly for each computation process, and only the power consumed by the CPU and memory varies with the workload. Further, the power consumption of the disk is sufficiently insignificant to be neglected. In this study, the workload used by each service is limited to CPU and memory usage, and the workload value is simulated by substituting the power consumption.

Since the introduction of the fog computing concept, fog computing has been actively researched. In the early years, studies focused on the usability of fog computing; the focus of research subsequently shifted, such that resource management and architecture are the current areas of research. The availability of fog computing has been suggested as a method of providing new types of IoT services [20–22]. It is referred to as an enabler of a specific service, in addition to being able to simply improve the performance of an existing service or to provide a real-time service.

Fine-grained design for the management of fog device resources is essential because the fog infrastructure comprises heterogeneous resources in computing and networking. Nishio et al. [23] proposed a mathematical framework and architecture for heterogeneous resource sharing based on service-oriented utility functions, and optimized the service delay. It is necessary to consider heterogeneous resources in the development of a fog container. However, the differential contributions of users, and the provision of services as rewards for those contributions, were not considered. Hong et al. [24] proposed a simplified programming abstraction called the PaaS programming model that can orchestrate highly dynamic heterogeneous resources at different levels of the network hierarchy and support low latency and scalability.

In addition, there are studies on fog device selection schemes that can reduce carbon dioxide emissions on the green technology side, and studies that allocate resources to preferentially use eco-friendly energy-based data centers [25–27]. Yao et al. [28] proposed a technique to reduce the latency of vehicular cloud computing by using vehicles and roadside units in vehicular ad hoc networks. In particular, in terms of VM migration, an optimization that ultimately minimizes the network cost was suggested. Deng et al. [29] studied the tradeoffs between power consumption and delay in terms of the interplay and cooperation between cloud computing and fog computing. These studies have analyzed key elements of fog computing through an in-depth approach, but assumed that fog devices were already widely deployed and that all their resources could be controlled overall. In other words, the optimization of fog container placement was described entirely from a network point of view, which is not suitable for the user-driven fog infrastructure construction model presented in this paper.

There is much research on fog computing operation architectures [30]. Among them, OpenFog is aiming for an open fog computing architecture and is developing its reference architecture to ensure a complete interoperability and security system [31]. Masip-Bruin et al. [32] have suggested a hierarchical structure in which fog and cloud concepts are mixed. The mixed concept can be built through traditional clouds, intermediate cloud/fog systems, and edge fog devices. The different fogs and clouds are defined as layers in a hierarchical architecture, in which the service running on the user's device can decide the best-suited fog/cloud resource on the fly. However, as described above, the difference between the existing research and the proposed model lies in the construction form and the emphasis on feasibility, which means increasing the possibility of actual implementation.

Participation inducement by user incentives is a policy used in a wide variety of areas. The incentive in named data networking works as efficiently as BitTorrent [33]. However, depending on the protocol specificity, attention should be paid to the incentive computation overhead, which may outweigh the positive effect. Li et al. [34] proposed a technique for introducing user incentives to the cellular offloading system. The incentive is paid to users in the congestion area to compensate for degraded experience, and it is also used to maximize the profit of the operator and to perform network scheduling. As with these utilization methods, incentives can be used for a variety of purposes, sometimes to maximize the profitability of the operator.

The main research area of fog computing is fog container placement [2,5,11]. Container placement comprehensively means placing a fog server instance, in the form of a Docker container or a VM, in a suitable fog device for the purpose of performance enhancement and the like. Container placement optimization is a type of bin packing problem that places containers with individual characteristics into fog devices with limited resources to achieve specific goals, such as minimizing power consumption.

Among the studies related to container placement, there is a container placement and mapping optimization study to minimize the power consumption of a hierarchical fog computing architecture for specific services such as crowd sensing [35]. The architecture consists of four layers, namely a data generator layer, fog layer, cloud layer, and data consumer layer. The data generator layer consists of various high-reliability, low-cost sensor nodes and mobile devices, which are widely distributed in public infrastructure to monitor the state change over time. In the fog layer, the fog devices connect to a local group of sensors to perform data analysis quickly. The devices collecting the data from the generators rapidly respond to the data consumers and transfer the data to the cloud layer for big data analysis. The cloud layer performs big data analysis and operations such as large-scale event detection, long-term pattern recognition, and relationship modeling. Finally, the data consumers, which consist of a wide range of entities including individual users, actuators, companies, and research institutions, request and receive specific categories of sensing data from the various layers.

The system in the aforementioned study groups the consumer, the fog access point providing access to the infrastructure, and the VM, which is the provider operating in all fog devices. A link bandwidth between them was allocated to process the data requests from the consumer. That is, the VM was placed to minimize the allocated link bandwidth between the consumer and the closest fog access point, and between the fog access point and the fog device on which the VM with the data desired by the consumer was located. However, this study considered only specific services, such as crowd sensing, and assumed that fog access points providing access to the infrastructure with the desired data were densely located throughout the city. Further, it has the limitation of not considering the main resources of fog computing, such as the workload and storage.

Fog container placement is similar to VM deployment in cloud computing from a resource utilization perspective. There are a number of studies on VM deployment schemes to reduce power consumption in the cloud data center. Gupta et al. [36] proposed a scheme to reduce resource waste, such as increased idle memory, when VMs with high CPU usage are deployed on single physical machines (PMs). This scheme considered the usage of the CPU and memory of each VM, and the computing capacity of the PM. Moreover, this scheme normalized the resource usage by grouping the VMs to be deployed, and minimized the number of PMs through the deployment of the VM groups. However, since this scheme assumed a cloud environment, there is no consideration of network usage and the various types of services and devices.

In this paper, we propose an incentive management method that works with the user-participatory fog computing architecture from our previous study [37]. Unlike the basic operations of the previous study, the participating users receive incentives based on the amount of shared resources, and the fog manager deploys containers to reasonably minimize the incentives. In the simulation, we compare the placement scheme that minimizes power consumption, the scheme that considers only the usage of the users, and the proposed optimization scheme, in terms of sharing incentives.

3. User incentives in user-participatory fog computing

3.1. User-participatory fog computing architecture

The realization of fog computing can be achieved through collaboration between cloud operators, IoT service operators, network infrastructure providers, and switch vendors. However, in this existing realization model, when a network service operator desires to provide a fog service in a specific area, the following parameters are unclear: the method of determining the specific area, the source of traffic usage information, the device of the specific vendor to be used, the network infrastructure of the specific provider to be used, and the entity responsible for routing traffic to the fog container.
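Section 2 characterizes container placement as a bin packing problem, and in the proposed model its goal is to reasonably minimize the sharing incentives paid to participating users. As a rough, non-authoritative illustration of that idea only, and not the paper's actual (linearized MINLP) formulation, the following sketch greedily places containers on fog devices, preferring capacity that device owners already consume themselves, which in this toy model incurs no sharing incentive. All device names, capacities (a power-consumption proxy for workload), and the incentive rule are hypothetical.

```python
def place(containers, devices):
    """Toy first-fit-decreasing placement minimizing shared capacity.

    containers: {name: demand}  (power-consumption proxy for workload)
    devices:    {name: {"cap": remaining capacity, "owner_free": capacity
                        the owner's own services would absorb anyway}}
    Returns (placement dict, total sharing-incentive units).
    """
    placement, incentive = {}, 0.0
    # Place the largest demands first (first-fit decreasing).
    for name, demand in sorted(containers.items(), key=lambda kv: -kv[1]):
        best, best_shared = None, None
        for dev, state in devices.items():
            if state["cap"] >= demand:
                # Only the part exceeding owner-used capacity is "shared"
                # and therefore incentive-bearing in this toy model.
                shared = max(0.0, demand - state["owner_free"])
                if best is None or shared < best_shared:
                    best, best_shared = dev, shared
        if best is None:
            raise ValueError(f"no device can host container {name!r}")
        state = devices[best]
        state["cap"] -= demand
        state["owner_free"] = max(0.0, state["owner_free"] - demand)
        placement[name] = best
        incentive += best_shared
    return placement, incentive

# Hypothetical example: two fog devices, two service containers.
devices = {"ap1": {"cap": 10.0, "owner_free": 6.0},
           "gw1": {"cap": 8.0, "owner_free": 1.0}}
placement, incentive = place({"video": 7.0, "smarthome": 3.0}, devices)
print(placement, incentive)
```

A real solver would, as the paper describes, also constrain network bandwidth and hop counts; this greedy pass only demonstrates why favoring owner-used capacity reduces the incentives the fog portal must pay out.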

Fig. 2. Conceptual diagram of the proposed incentive-based, user-participatory fog computing architecture.

In this paper, we propose the incentive-based, user-participatory fog computing architecture based on the fog portal to address these parameters. Fig. 2 shows the conceptual diagram of the proposed architecture. The fog portal is a server located on the internet, and it performs resource mediation between users and network services. The fog container placement is handled by the fog manager within the local network, and a fog device, such as a switch, hub, Wi-Fi AP, or IoT GW, is installed directly by the user who wants to benefit from the desired fog services. The user connects the device to the local network and registers the device information in the fog portal. By purchasing and registering a fog device, the user participates in the infrastructure construction and is called a participating user.

The fog manager is located within the local network and monitors and controls the computing resources of the fog devices in the network. That is, the fog manager tracks the resource usage of all containers in the network, maintains the available resource information of each device, and synchronizes the information with the portal. In other words, similar to the SDN controller, the fog manager is a local entity that has control over the computing resources in the network.

The fog portal analyzes and processes detailed statistical information gathered from the SDN controller, and provides such information to the appropriate network service operator. The service operator determines the necessity of the fog service based on the information received through the user interface provided by the fog portal. If it is determined that the fog service is required, the service developer implements the fog container in the form of a Docker image, and the implemented fog container is deployed by the fog portal. More detailed basic operations of this model have been presented in our previous work [37].

3.2. User incentives

The ideal goal of users participating in the fog infrastructure construction is to improve the performance of their main network services through fog computing. With this goal, the user purchases and installs a fog device to improve performance through the fog service, or to use a service that operates based on fog computing only. For example, a user may declare a specific smart home service to the fog portal by installing and registering a Wi-Fi AP as a fog device. In this case, the device operates as both the AP and the smart home hub. Similarly, a home user can lease the computing resources of a fog device in the home to smoothly run smartphone 3D games that require high computing capacity. The fog device can also be operated as dedicated equipment for network services, such as healthcare, telemedicine, video security, and power measurement services.

Ideally, the user participates in expanding the fog infrastructure in order to use fog services, but this goal alone may be insufficient to increase the likelihood of participation. Compared with the existing realization model, the proposed model has an additional operation whereby
Fig. 3. Types of incentives and corresponding payment basis.

the fog portal acquires the operating authority for the fog device in order to mediate resources. This could cause the user to believe that their device may experience disadvantages such as operation at unexpected points, exposure to security threats, and excessive power usage. This leads to a decrease in the participation rate. Therefore, appropriate incentives must be paid to induce the users to delegate the operating authority of their devices to the portal. In other words, the incentives are an indispensable component of user participation.

Fog computing is an intertwined architecture of various business operators; thus, the incentive payment relationships can become complex. Fig. 3 shows the kinds of incentives paid to the participating user. In this case, an incentive is a payment to a value provider from a beneficiary, made in various ways. Since the proposed architecture is user-participatory, it is necessary to describe the incentives from the perspective of the participating users. In other words, for incentives, the paying and receiving entities and the payment criteria need to be defined. In the proposed model, the incentives are paid to the participating users from the non-participating users, the participating users using excessive resources, the ISPs, the cloud operators, and the fog portal, each based on its own criteria.

The types of incentive can be classified as a participation incentive (PI) and a sharing incentive (SI). The PI is the incentive paid in return for participation in fog infrastructure construction. The benefits of establishing fog infrastructure are not limited to users. Fog computing has the ability to effectively handle, within the local network, the traffic of network services that would otherwise be processed through the core network. This core feature can significantly reduce utilization of the cloud data center and the network infrastructure, leading to lower operating expenses (OPEX). However, it is very difficult to determine how much each participating user has contributed to the OPEX savings of the business operators. To determine the contribution, the following should be considered for each participating user: the performance of the fog device provided by the participating user, the resource usage of the user, the network traffic reduced by the user device, and the characteristics of the network services used by the user. The container placement periodically performed by the fog manager further increases the difficulty of this decision. Therefore, the ideal incentive payment criterion for infrastructure operators is the even distribution of part of the OPEX savings to all participating users, regardless of device performance or usage.

The fog portal generates revenue from the network services in the form of monthly fees, and the participating users have a significant stake in the revenue, similar to crowdfunding. Therefore, the portal is required to pay part of the net income to the participating users as the PI. The participating users can install fog devices with various performance levels and determine policies regarding the level of computing resources of the device to be shared. The profitability of the portal differs considerably according to the resource sharing policy of the users. If the user sets the policy to share all resources, the portal can put more network services into the network and increase the profitability. In addition, flexibility and resilience are increased for operations such as container placement. In contrast, if it is set not to lease the resources that exceed the usage of the service used by the user, there may be insufficient resources to put the network service into the network, and the container placement flexibility becomes comparatively low. Thus, the PIs provided by the fog portal are paid in proportion to the amount of shared resources of the participating users, not the performance of the device.

In the proposed model, users are classified into four categories: a non-participating user (NU), a participating user using excessive resources (EPU), a private participating user (PPU), and a sharing participating user (SPU). It is possible to distinguish NUs from participating users depending on whether or not their fog device is registered. Participating users are identified by whether they are using more resources than they have contributed. The users who use excessive resources are EPUs; otherwise, they are general participating users. The general participating users are divided into PPUs and SPUs according to their own sharing policies or tendencies. The PPU is the user who does not want to provide surplus resources to other users, and the SPU is the user who wants to lease resources to receive the SIs. In other words, PPUs do not want any additional power consumption above their own usage, and SPUs want to rent out their resources and obtain the SI regardless of power consumption. The user must bear the cost of the power consumed at his/her device, and, of course, the SI should be higher than the additional power usage fee resulting from the workload used by other users. Because electricity bills for the same power consumption vary according to local laws, it is assumed that the participating users of the proposed system will set the resource sharing policy to "do not share" if the additional rate is higher than the SI for excess workloads.

In addition to the PI, the SI is also essential for operational resilience and user engagement. In particular, it is essential to provide appropriate incentives to the participating users to support NUs and EPUs. When an NU wants to use the fog service, he/she pays a fee to the fog portal in proportion to the usage amount. Likewise, when an EPU uses the device resources of other participating users, he/she pays a fee to the portal. The portal pays the collected fees to the participating users who have devices that provide extra resources; the paid SIs are minimized by the container placement optimization, which is described later in the paper. In other words, the SIs are paid with the fees collected from the NUs and EPUs, based on the leased computing resources.

Table 1
Summary of user incentives.

Incentive  Payment entity   Criteria                                    NU            EPU                 PPU                    SPU
PI         ISPs             Participation                               –             O                   O                      O
PI         Cloud operators  Participation                               –             O                   O                      O
PI         Fog portal       Resource provided with the infrastructure   –             Available resource  User's resource usage  Available resource (total device resource)
SI         Other users      Leased resource                             Pay the fees  Pay the fees        –                      Excess usage according to the container placement

Table 1 summarizes the incentive payment criteria for each user category. The NUs are not entitled to the PI and pay a fee to the fog portal as they use the service. The EPUs can receive the PIs, but they use a portion of the resources of the SPUs and pay a fee for overuse. The PPUs receive only the PIs, and they receive a revenue distribution from the fog portal, which is proportional to their average resource usage. The SPUs can receive all incentives; in the case of the SI, the incentives are given in proportion to the usage of other users when the fog containers for other users operate in the devices of the SPUs.

The fog portal does not charge a commission for resource rental between the users, and only receives a profit for the power consumption reduced by the fog manager's optimization of the container placement. That is, the portal can maximize its income by minimizing the total power consumption of the network.

An incentive is typically in a monetary form, such as a fee reduction or cash. In the case of the PI, with its defined criteria, determining this amount does not present any significant problems. The sum of the PIs for the OPEX savings of the infrastructure operators cannot exceed the OPEX savings, and the profit distribution of the portal cannot exceed the profit. However, in the case of the SI, the proposed system only defines the resource usage and the SIs as proportional. That is, there is no mention of incentive payments for a particular unit of resource usage, which is directly related to the portal revenue and user participation. If the base incentive and fee are high, the number of NUs or EPUs and the profit of the SPUs decrease. If they are low, the expectation of incentives and the possibility of user participation decrease. As such, the determination of the base incentive for a particular unit of resource usage establishes a tradeoff relationship. Obtaining appropriate base incentives and fees for leased resource units is beyond the scope of this study.

Table 2
Parameters of the system model.

Symbol   Description
U        The set of users in the network
S        The set of services in the network
D        The set of fog devices in the network
T        The matrix of traffic related to services from users
β_usd    The selection variable, which indicates whether a container of a service s for a user u is on a device d or not
τ_us     The traffic generated by a user u for a service s in a unit time
W_d      The workload capacity of a device d
w_a^s    The workload per traffic of the application container of a service s
w_v      The workload per traffic of a VNF container
N_v^s    The number of VNFs of a service s
N_c^d    The number of fog containers in a device d
ε_d^i    The power efficiency of a device d for loading a container, including the idle state
ε_d^a    The power efficiency of a device d for operating a container in the active state
ε_d^n    The power efficiency of a device d for transmitting data
e_d      The power consumption of a device d
ê_d      The owner power consumption of a device d

4. Fog container placement to minimize sharing incentives

4.1. Considerations

There are several considerations in the placement of the fog container by the fog manager. The first consideration is that even if the container is placed somewhere in the local network, the effect on the quality of service (QoS) and user experience (UX) is not significant. The most important features of fog computing that enhance UX are consideration of the user's location and the reduction in latency. A local network with a 16–24 bit subnet prefix is not geographically broad, unlike a WAN or the internet. This means that the approximate location of the user can be determined, regardless of the container placement in the local network. The other consideration is that the round-trip time in the local network is not long. The latency of the internet ranges from a few tens of milliseconds to a few seconds, where the latter can be perceived by users, whereas the latency of the local network is only a few milliseconds. This means that even if the container is placed on the device farthest from the user in the local network topology, the user only needs to wait a few milliseconds to use the service, which is not critical. Therefore, we have limited the container placement to be performed in only one local network, and interworking with neighboring networks will be covered in future work.

Second, the network services are configured to support scalability through flexible resource management within each local network. In the network, each service can operate across multiple devices, and flexible resource management is achieved based on the execution of multiple identical containers. A container that provides a major function of a service is called an application container; that is, a service can consist of a plurality of application containers. In addition, a service can include network functions, such as a database, load balancer, and firewall, according to its operating policy. Each function is a virtualized network function (VNF) instead of separate hardware, and each VNF operates as a Docker container. Since the VNF containers are closely related to the application containers and require rapid responses, the VNFs of the corresponding services must be operated on all fog devices in which the application container of a specific service operates.

Third, it is assumed that the participating users seek the SIs first. Under this assumption, all SPUs want to operate the maximum number of containers on their devices to obtain SIs while taking risks such as equipment operation cost and noise. The fog portal can maximize its revenues by optimizing the container placement to reduce overall power consumption while properly positioning the containers on the power-efficient devices. Therefore, the fog manager must deploy the fog containers to reduce total power consumption and to minimize the amount of SIs.
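The four user categories that drive these incentive and placement decisions reduce to three checks: device registration, excess usage, and sharing policy. The following is an illustrative sketch only, not the authors' implementation; the field names are assumptions introduced for this example.

```python
# Sketch of the user classification of Section 3 (NU / EPU / PPU / SPU).
# Field names are illustrative assumptions, not the paper's data model.
from dataclasses import dataclass

@dataclass
class User:
    has_registered_device: bool   # no registered fog device -> NU
    used_workload: float          # workload the user's services consume
    contributed_workload: float   # workload capacity the user's device provides
    shares_surplus: bool          # sharing policy: SPU if True, PPU if False

def classify(user: User) -> str:
    if not user.has_registered_device:
        return "NU"   # non-participating user
    if user.used_workload > user.contributed_workload:
        return "EPU"  # participating user using excessive resources
    return "SPU" if user.shares_surplus else "PPU"

print(classify(User(False, 10, 0, False)))   # NU
print(classify(User(True, 120, 100, True)))  # EPU
print(classify(User(True, 40, 100, False)))  # PPU
print(classify(User(True, 40, 100, True)))   # SPU
```

Note that only the EPU test depends on runtime measurements; the PPU/SPU split is a static policy choice made by the device owner.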
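Before the model is formalized, a minimal numeric sketch shows how a sharing incentive arises: a device's consumption is split into idle, active, and networking components, and the owning SPU is compensated only for consumption beyond its owner baseline. All constants and values below are illustrative assumptions, not the paper's parameters.

```python
# Toy sketch of the per-device power accounting and the sharing-incentive
# rule formalized in Sections 4.2-4.3. All numbers are illustrative.

def device_power(num_containers: int, workload: float, traffic_kbit: float,
                 eff_idle: float, eff_active: float, eff_net: float) -> float:
    # e_d = eps_i * N_c  +  eps_a * (processed workload)  +  eps_n * (traffic)
    return (eff_idle * num_containers
            + eff_active * workload
            + eff_net * traffic_kbit)

def sharing_incentive(e_d: float, e_hat_d: float) -> float:
    # theta_u = max(0, e_d - e_hat_d): the owner is paid only for power
    # beyond his/her own usage (devices map one-to-one to users).
    return max(0.0, e_d - e_hat_d)

# Owner baseline: 2 containers, 1000 workload units, 500 kbit of traffic.
owner = device_power(2, 1000, 500, eff_idle=0.5, eff_active=0.01, eff_net=2e-5)
# The fog manager's placement adds another user's container and workload.
placed = device_power(3, 1800, 900, eff_idle=0.5, eff_active=0.01, eff_net=2e-5)
print(round(sharing_incentive(placed, owner), 3))  # 8.508
```

A placement that consumes less than the owner baseline yields a zero incentive, which is why the later examples report SIs of zero for lightly loaded SPU devices.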

4.2. System model

The optimization design goal is to minimize the SIs. The minimization of the SIs is directly linked to the maximization of the revenue of the portal, which is achieved by minimizing the power consumption of the workload while maintaining network usage at such a level that congestion does not occur. The participating users connect various types of fog devices, and the power efficiency, storage capacity, and acceptable workload of each device differ.

The fog service operates in a container form across multiple fog devices in the network. Each container consumes workload for processing each request from the users across several devices. A single request from one user does not have to be processed by multiple containers; thus, a single service request from the user is directed to a specific container in a certain device. In other words, the user can use several services, but the traffic for a single service of the user is processed in only one container. The selection variable, which indicates that the traffic for service s generated by user u is directed to device d, can be expressed as:

β_usd ∈ {0, 1}, ∀u ∈ U, s ∈ S, d ∈ D    (1)

If β_usd is set to 1, the traffic related to service s from user u should be sent to device d; if β_usd is zero, it should not be sent.

The workload refers to the average amount of CPU, memory, and I/O work that a particular container uses per unit time. In this paper, we express it as a normalized value to avoid excessive complexity. Other studies represent the resources used by processes as workloads [18]. As described above, a service consists of the VNF containers used by the service and the application containers in one device. The workload of an application container differs according to the type of service. For example, if a service only stores data, there is a low workload; if a service performs CPU-intensive real-time video processing, the workload will be quite high. Therefore, the workload of an application container is different for each service and can be expressed as a constant value w_a^s according to the network input data amount. In contrast, in the case of a VNF container, the workload for the input data amount can be represented by a constant value w_v, since there is no significant difference between the services.

The traffic element τ_us represents the average traffic per unit time that user u communicates with the container of service s in the network. The selection variable should not be set to zero for a particular user who generates traffic for a particular service. In addition, a certain service, which communicates with the user, operates the application container on only one device; thus, the following constraints are defined:

τ_us / ‖T‖ ≤ Σ_{d∈D} β_usd, ∀u ∈ U, s ∈ S    (2)

Σ_{d∈D} β_usd ≤ 1, ∀u ∈ U, s ∈ S    (3)

For all services, the sum of the workloads due to the traffic generated by all users for a particular device should be less than the workload assigned to that device by the assignment variable. Thus, the following relationship holds:

Σ_{s∈S} Σ_{u∈U} β_usd · τ_us · (N_v^s · w_v + w_a^s) ≤ W_d, ∀d ∈ D    (4)

4.3. Problem formulation

In the proposed model, the fog manager places the containers to minimize the power consumption of fog computing in the network, considering the service usage by the users. The power consumption of each device can be defined as follows:

e_d = e_d^i + e_d^a + e_d^n, ∀d ∈ D    (5)

The power consumption per unit time of the device is the sum of the base power consumption of the containers, the power consumption of the containers in the active state, and the power consumption of networking. First, to formulate the power consumed by the idle-state containers in the device, we need to calculate the number of containers in each device, as follows:

N_c^d = Σ_{s∈S} min(1, Σ_{u∈U} β_usd) · N_v^s + Σ_{s∈S} Σ_{u∈U} β_usd, ∀d ∈ D    (6)

The number of application containers in the device is equivalent to the number of users using the services operated by the device. The VNF containers exist in the number of VNFs of each service operated in the corresponding device.

The power consumption of the idle-state containers can now be defined as follows:

e_d^i = ε_d^i · N_c^d, ∀d ∈ D    (7)

Note that the device type depends on the user selection, so the power efficiency of the installed device differs. Moreover, the number of application containers depends on the number of requests by the users. In addition, the power consumed by the active-mode containers and the power consumption for networking can be defined, respectively, as follows:

e_d^a = ε_d^a · Σ_{s∈S} Σ_{u∈U} β_usd · τ_us · (N_v^s · w_v + w_a^s), ∀d ∈ D    (8)

e_d^n = ε_d^n · Σ_{s∈S} Σ_{u∈U} β_usd · τ_us, ∀d ∈ D    (9)

If there is no control by the fog manager, the users run the containers of the services which they use on their own devices. The owner power consumption of device d can be expressed as follows:

ê_d = ê_d^i + ê_d^a + ê_d^n, ∀d ∈ D    (10)

As in (5), this consists of three kinds of power consumption. However, the selection constant β̂_usd used to calculate the owner power consumption is set in advance so that the containers of the services used by the users operate within their own devices. Therefore, the components of (10) are defined as follows:

ê_d^i = ε_d^i · Σ_{s∈S} Σ_{u∈U} β̂_usd · (N_v^s + 1), ∀d ∈ D    (11)

ê_d^a = ε_d^a · Σ_{s∈S} Σ_{u∈U} β̂_usd · τ_us · (N_v^s · w_v + w_a^s), ∀d ∈ D    (12)

ê_d^n = ε_d^n · Σ_{s∈S} Σ_{u∈U} β̂_usd · τ_us, ∀d ∈ D    (13)

Now, the SI can be defined based on the power consumption according to the workload. The participating users are classified as the EPU, PPU, and SPU, and the SIs paid to the SPUs can be defined as follows:

θ_u = max(0, e_d − ê_d), ∀{u, d} ∈ U × D    (14)

It is assumed that device d has a one-to-one relationship with user u. If the power consumption of device d under the container placement is larger than the owner power consumption, the difference is referred to as the SI. Otherwise, the SI becomes zero.

The other user classes should now be considered. As described above, the PPU is a user who wishes the power beyond his/her own usage to not be consumed at his/her device. Intuitively, it seems reasonable that the device of the PPU is not included in the

container placement process. However, it may be more reasonable to include this device in the process, but to place the containers so that it does not exceed the owner power consumption of this user. To take this into account, the tolerance constant κ_u is defined as follows:

κ_u = { ∞, if u ∈ U_SPU;  1, if u ∈ U_PPU }, ∀u ∈ U    (15)

The relationship between the owner power consumption and the actual power consumption based on the tolerance constant can be defined as the following constraint:

e_d ≤ κ_u · ê_d + m, ∀κ_u ∈ [1, ∞], m > 0, {u, d} ∈ U × D    (16)

m is a very small positive number greater than zero; it was added to define this relationship when the owner power consumption of the SPU is zero. That is, if the tolerance constant of the PPU is set to one, the containers are not placed on his/her device excessively, so that the SI will not occur. In addition, the tolerance constant of the SPUs is infinite to reflect their tendency to want to place the maximum number of containers on their devices.

In the case of the EPU, there are no additional constraints to be defined. EPUs are treated as users who use network services that cannot be processed by using all of the available workloads of their devices. In other words, the owner power consumption is always larger than the actual power consumption of the device of the user. The fees for the excess usage are computed separately from the objective function associated with the SI and therefore need not be considered in the optimization problem. NUs can be treated the same as other participating users, but the workload capacity of the device of the user is always zero.

The optimization problem is mathematically formulated as follows:

min Σ_{u∈U} θ_u
s.t. (1), (2), (3), (4), (16)    (17)

Eq. (17) is a mixed-integer non-linear programming (MINLP) problem, requiring variable relaxation [38]. The objective function has two minimization functions, and it should be linearized. Fortunately, the linearization is simply achieved by using an auxiliary variable and the big-M. The functions are linearized as follows:

δ_sd = min(1, Σ_{u∈U} β_usd), ∀s ∈ S, d ∈ D    (18)

δ_sd ≥ 1 − M·λ¹_sd, ∀s ∈ S, d ∈ D    (19)

δ_sd ≥ Σ_{u∈U} β_usd − M·λ²_sd, ∀s ∈ S, d ∈ D    (20)

λ¹_sd + λ²_sd = 1, ∀s ∈ S, d ∈ D    (21)

λ¹_sd, λ²_sd ∈ {0, 1}, ∀s ∈ S, d ∈ D    (22)

θ_u ≥ 0 − M·λ¹_u, ∀{u, d} ∈ U × D    (23)

θ_u ≥ e_d − ê_d − M·λ²_u, ∀{u, d} ∈ U × D    (24)

λ¹_u + λ²_u = 1, ∀{u, d} ∈ U × D    (25)

λ¹_u, λ²_u ∈ {0, 1}, ∀{u, d} ∈ U × D    (26)

The optimization problem (17) can be redefined as an integer linear programming problem as follows:

min Σ_{u∈U} θ_u
s.t. (1)–(4), (16), (19)–(26)    (27)

4.4. Examples of container placement

Table 3 shows three examples of container placement. In the examples, there are three SPUs and two NUs, and the service usage and workload capacity of the registered devices are shown in the table. Optimizations 1 and 2 do not consider the SI. Optimization 1 deployed the containers to minimize power consumption, optimization 2 deployed the containers so that the user usage and power consumption were proportional to a certain degree, and optimization 3 deployed the containers to minimize the SI.

Table 3
Examples of three container placement results (PC: power consumption; SI: sharing incentive).

User   Category  Owner power (W)  Max. device power  Provided power  Opt. 1 PC  Opt. 1 SI  Opt. 2 PC  Opt. 2 SI  Opt. 3 PC  Opt. 3 SI
A      SPU       30               80                 50              0          0          20         0          30         0
B      SPU       60               100                40              0          0          90         30         60         0
C      SPU       0                100                100             90         90         0          0          25         25
D      NU        10               0                  −10             0          −10        0          −10        0          −10
E      NU        20               0                  −20             0          −20        0          −20        0          −20
Total            120              280                160             90         60         110        0          115        −5

The total resource usage of the optimization 1 result is the lowest compared to the other optimization results. This means that the system used the lowest power compared to the other optimizations to handle the same workload, indicating that the container placement reduced the overall power consumption in the network. According to the result of optimization 1, the user C device with the best power efficiency consumed 90 W; thus, 90 SIs should be paid to user C, while a total of 30 in fees was collected from users D and E. As a result, the fog portal paid additional SIs in the network and incurred losses. It should be noted that users A and B did not pay the usage fee, despite having higher service usages than the power consumption of their devices. This is merely an individual benefit from the container placement, so there is no basis for collecting fees from the users concerned. As such, if the containers are placed to minimize power consumption, there may be a situation in which the fog portal suffers losses.

Optimization 2 minimized the power consumption while maintaining it at a certain proportion of the user service usage. Although the total power consumption increased compared to that of optimization 1, the portal did not suffer losses from the SI management. This is because it considered the service usage of the SPUs, and thus the SIs to be paid to the SPUs were reduced significantly. However, no container is operated on the user C device with the best power efficiency, because user C did not use any services. Optimization 3 placed the containers to minimize the SI. The portal benefits from the SI management because the collected fees are higher than the SIs.

5. Simulation results and performance evaluation

In this paper, we propose the incentive-based, user-participatory fog computing architecture and a container placement scheme that minimizes the sharing incentives. In this section, we analyze the simulation results to evaluate the proposed fog container placement scheme. The simulation was performed in comparison with the container placement scheme that minimizes power consumption, and the power consumption minimization scheme considering owner usage.

The simulation parameters are shown in Table 4; some of these, such as power efficiency and workloads, refer to [39–41]. The parameters follow the table for each simulation unless otherwise noted. The environment constants have arbitrary values within a given parameter range for each simulation, and each result is averaged over at least 30 simulation results. The simulation was conducted for an environment in which one fog manager carried out container placement operations on a single local network.

Table 4
Simulation parameters.

Symbol   Description                                  Value             Unit
W_d      Workload capacity of the device              5000–10,000       Workload
w_v      VNF workload per traffic                     0.1               Workload/kbit
w_a^s    Application workload per traffic             1–100             Workload/kbit
ε_d^i    Power efficiency per container               0.3–0.7           J/(s × container)
ε_d^a    Power efficiency per workload                0.6–1.3 × 10⁻²    J/workload
ε_d^n    Power efficiency per communication           0.5–2.0 × 10⁻⁵    J/kbit
N_v^s    Number of VNF containers per service         1–5               EA
|U|      Number of users                              10
|U_PPU|  Number of PPUs                               2
|U_NU|   Number of NUs                                2
WR       Total workload to requested workload ratio   20                %

The main metric of the simulation is the total amount of SI, which is the sum of the power consumption of each device, excluding the owner power consumption. Negative SIs are not considered for the SPUs, and SIs are not paid to the PPUs. Since an NU has zero available workload of its own device, the owner power consumption is charged as is.

Fig. 4 shows the SI payment amount of each placement scheme according to the change in the number of NUs while the total number of users is fixed at ten. In the legend, MPC denotes the scheme minimizing power consumption, MPU the scheme minimizing power consumption considering owner usage, and MSI the scheme minimizing the amount of the SI payment. The tolerance constant of the MPU was fixed at 1.2, and this scheme places the containers in such a way that it does not convey unreasonable power consumption to the user when there is no SI. In the simulation, there is no PPU, so when the number of NUs increases from zero to nine, the number of SPUs decreases from ten to one. In addition, the total workload to requested workload ratio (WR) was set at 20% and 50%, respectively.

Fig. 4. Amount of the total SI payment of each container placement scheme according to the number of NUs; (a) WR 20%, (b) WR 50%.

The simulation results in Fig. 4(a) show that the SI payment amount of the proposed MSI scheme is lower than that of the other schemes in all cases. According to the results of the MSI scheme, the amount of SI payment increases until the number of NUs reaches seven, and then decreases. The initial increase is because more workloads need to be processed on the SPU devices owing to the increase in the number of NUs. The later decrease is due to the total requested workload being reduced by the decrease in the number of SPUs, while the WR is fixed at 20%. The MPU scheme pays an average of 11.88 more incentives than the MSI until the number of NUs reaches seven. Because the MPU has a tolerance constant of 1.2 considering the owner usage, it places the containers to consume less power than the SPU owner power consumption for almost all devices. In other words, the power consumption due to the workload is added to the device with good power efficiency at a level of less than 120% of the owner usage, and an addition of 20% leads to an increase in the SI payment amount. In contrast, the MSI tries to avoid an unnecessary increase in the SI amount by placing the containers to consume as much power as the owner power consumption of all SPUs.

The MPC places most containers on one or two power-efficient devices provided there is workload capacity. In this process, the power consumption of most SPU devices is zero; however, the power consumption of a particular device will rise to its maximum. Obviously, this leads to a significant increase in the amount of SI payment. The SI payment in the MPC scheme decreases as the number of SPUs decreases over the entire case, because the total requested workload is reduced owing to the decrease in the number of SPU devices. When there is one SPU, the SI amount of the three optimization results is equal. The result of Fig. 4(b) shows that the
86 W.-S. Kim, S.-H. Chung / Computer Networks 145 (2018) 76–88

40 140 120
Break-Even Point 104.6711
20 120
Revenue of the fog portal

0 100

Collected Fees

Total Incentives
-20 80

-40 60 60

-60 40
-80 20
-100 0
0 1 2 3 4 5 6 7 8 9 0
The Number of Non-Participating Users 0
Fig. 7. Total sharing incentives paid in each container placement scheme when only
Fig. 5. Comparison of each container placement scheme in terms of the revenue the SPUs exist in the network.
with 20% WR.
90 usage MPC

Power Consumption (W)

Total Incentives to be Paid

70 50

60 MPC 40
50 MPU
20 10
10 0
0 u1 (NU) u2 (NU) u3 (NU) u4 (NU) u5 (PPU) u6 (PPU)
0 1 2 3 4 5 6 7 Users
The Number of Private Participating Users
Fig. 6. Comparison of each container placement scheme in terms of the revenue
Power Consumption (W)

according to the number of PPUs.

total amount of SI is increased compared to that of Fig. 4(a) as the
requested workload is high. The other parts are similar to the sim- 30
ulation results of WR of 20%.
Fig. 5 shows the comparison of the three optimization results 20
in terms of revenue of the portal, with the WR fixed at 20%. The 10
revenue of the portal is the fees from the NUs, represented by a
dashed line in Fig. 5 minus the SI payment in Fig. 4(a). First, in 0
u7 (SPU) u8 (SPU) u9 (SPU) u10 (SPU) u11 (SPU) u12 (SPU)
the case of the MPC scheme, the portal suffers a loss for the entire Users
case. This occurs because the containers are placed with no consideration of the SIs. The MPU scheme generates revenue for the portal only when the number of NUs is from two to five; in the other cases it incurs losses. The proposed MSI scheme generates neither income nor expenditure when there is no NU, and thereafter revenue is generated in all cases except where there are eight or nine NUs. The simulation uses the WR, which determines the workload generated based on the total available workload in the network. Therefore, if the number of SPUs becomes smaller than the number of NUs, the average service usage of the NUs is also reduced, resulting in a loss in the latter half of the simulation. Regardless of the available workloads, if all users randomly generate traffic, the revenue of the portal increases for every scheme, as the collected fees increase.

Fig. 6 shows the amount of SI payment for each placement scheme as the number of PPUs changes. The total number of users and the number of NUs were fixed to ten and two, respectively, and the number of PPUs increases from zero to seven. Therefore, the number of SPUs decreases from eight to one. As shown in the previous simulation results, as the number of SPUs decreases, the amount of SI payment in the MPC decreases and the payment amounts of the MPU and MSI schemes increase. However, in contrast to the previous simulation, since the PPUs are among the participating users, the amount of the SI is not equal when there is one SPU. This is because the containers of network services that the NUs use can be deployed somewhere in the devices of the PPUs and the SPU.

Fig. 7 shows the SI payment amount of each optimization scheme when there is no NU, EPU, or PPU; in other words, only the SPUs exist. As a result of simulation in an environment with random traffic, the SI payment of the MSI is zero, that of the MPU is 26.64, and that of the MPC is 104.67. The amount of SI payment of the MSI is clearly lower than that of the other schemes. This implies that the MSI-based container placement does not incur any loss to the fog portal when there is no situation in which the NU or the like leases resources from other users.

Opt. schemes    MPC       MPU       MSI
Total PC (W)    127.46    179.43    194.20
Incentives      86.45     25.37     17.00

Fig. 8. Power consumption by user device according to each optimization scheme.
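The portal economics discussed above (fees collected from service users versus sharing incentives paid to the participants hosting their containers) can be sketched with a toy greedy placement. This is an illustrative sketch only, not the paper's MINLP formulation; the names `Device` and `place_containers` and the rates `SI_RATE` and `FEE_RATE` are hypothetical.

```python
# Toy sketch of incentive-aware placement: hosting a container on the
# requester's own device costs the portal no sharing incentive (SI);
# hosting it on another participant's device incurs SI for the leased
# workload. SI_RATE and FEE_RATE are assumed, not taken from the paper.
from dataclasses import dataclass

SI_RATE = 1.0   # assumed incentive paid per unit of leased workload
FEE_RATE = 1.5  # assumed fee collected per unit of served workload

@dataclass
class Device:
    owner: str
    capacity: float  # workload units the device can host
    used: float = 0.0

def place_containers(requests, devices):
    """requests: list of (user, workload). Returns (placement, si_paid)."""
    placement, si_paid = [], 0.0
    for user, load in requests:
        # Among devices with enough spare capacity, prefer the one that
        # minimizes the incremental SI (the owner's own device costs 0).
        candidates = sorted(
            (d for d in devices if d.capacity - d.used >= load),
            key=lambda d: 0.0 if d.owner == user else SI_RATE * load,
        )
        if not candidates:
            continue  # drop the request; a real scheme would offload it
        dev = candidates[0]
        dev.used += load
        if dev.owner != user:
            si_paid += SI_RATE * load
        placement.append((user, dev.owner, load))
    return placement, si_paid

devices = [Device("alice", 4.0), Device("bob", 2.0)]
requests = [("alice", 3.0), ("carol", 2.0)]  # carol owns no device (an NU)
placement, si = place_containers(requests, devices)
fees = FEE_RATE * sum(load for _, _, load in placement)
net = fees - si  # portal profits only while fees exceed incentives paid
```

In this toy run, alice's container lands on her own device at no incentive cost, while carol, who owns no device, is served from bob's device, so the portal pays an incentive for the leased workload. The portal stays profitable only while the collected fees exceed the incentives paid, which mirrors the break-even behavior of the schemes compared above.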
Fig. 8 shows the power consumption per user according to the container placement by each optimization scheme when user usage is fixed. In the simulation, the total number of users is 12, the number of NUs is four, and the number of PPUs is two. The figure shows the owner power consumption and the power consumption according to the MPC, MPU, and MSI, from left to right. Here, the owner power consumption of the NUs is estimated based on the average value of the device specifications of the other participating users, because the calculation cannot be performed as they have no devices. The MPC intensively placed the containers on the power-efficient devices of users 10, 11, and 12, regardless of the usage amounts of the participating users. Since the MPU considers owner usage, the containers are placed to consume power similar to the owner power consumption. It should be noted that, although the performance of the device of user 12 is quite good, the owner usage of the user is zero, so no containers were placed on the device.

Finally, as a result of the container placement based on the MSI, the containers were placed so that only users 10 and 12 exceeded the owner power consumption. The other containers were placed so that the power consumption resulting from the container placement process in the devices of the remaining users was as close as possible to the owner power consumption, but did not exceed it. As a result of the MPC, MPU, and MSI container placements, the total power consumption of all devices in the network was 127.46, 179.43, and 194.20 W, respectively. The total power consumption by the MPC was the lowest and that of the MSI was the highest. In contrast, the SI payments were 86.45, 23.37, and 17.00, respectively, and the collected fees were 56.13.

6. Conclusions and future work

Although fog computing is recognized as a computing model for the IoT era, it is still not widely used. This is attributed predominantly to the uncertainty regarding both the massive replacement of network equipment and the infrastructure operators. In this paper, we propose an incentive-based, user-participatory fog computing architecture based on the fog portal and its associated container placement scheme, so as to enhance the feasibility, performance, and profitability of fog computing. In the proposed model, the user connects the purchased fog device to the network, e.g., at home or in the office, and registers the device in the fog portal. The fog portal performs an intermediary role between users and network service operators. Participating users are motivated by incentives and receive more revenue by providing other users with their device resources. Moreover, the fog portal mediates the sharing incentives between the participating users who provide the resources and the other users who use the resources. The fog manager performs container placement on the local network and places the containers to minimize the sharing incentives paid. The proposed container placement method is compared through simulation with the power consumption minimization scheme and with power consumption minimization that considers owner usage.

In terms of future work, research into more efficient management techniques is required through collaboration with neighboring local networks, rather than with standalone operations. Network collaboration for user-participatory fog computing could create additional research challenges, such as routing changes, protocols between SDN controllers, and information sharing between fog managers. In addition, it is necessary to build a more sophisticated model of the power consumption versus workload to improve the accuracy of the power consumption prediction. Finally, to mitigate the complexity of the proposed optimization scheme, distributed convex optimization, such as the alternating direction method of multipliers (ADMM), needs to be applied [42,43].

Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2018R1D1A1B07049355).

References

[1] F. Bonomi, R. Milito, J. Zhu, S. Addepalli, Fog computing and its role in the Internet of Things, in: Proc. ACM MCC, 2012, pp. 13–15.
[2] S. Yi, C. Li, Q. Li, A survey of fog computing: concepts, applications and issues, in: Proc. ACM Mobidata, June 2015, pp. 37–42.
[3] S. Yi, Z. Hao, Z. Qin, Q. Li, Fog computing: platform and applications, in: Proc. 3rd IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), November 2015, pp. 73–78.
[4] M. Yannuzzi, R. Milito, R. Serral-Gracià, D. Montero, M. Nemirovsky, Key ingredients in an IoT recipe: fog computing, cloud computing, and more fog computing, in: Proc. IEEE CAMAD, December 2014, pp. 325–329.
[5] L.M. Vaquero, L. Rodero-Merino, Finding your way in the fog: towards a comprehensive definition of fog computing, ACM SIGCOMM Comput. Commun. Rev. 44 (October (5)) (2014) 27–32.
[6] T.H. Luan et al., Fog computing: focusing on mobile users at the edge, arXiv preprint arXiv:1502.01815v3, March 2016.
[7] T.N. Gia, et al., Fog computing in healthcare Internet of Things: a case study on ECG feature extraction, in: Proc. IEEE CIT, October 2015, pp. 356–363.
[8] H. Dubey, et al., Fog data: enhancing telehealth big data through fog computing, in: Proc. ASE BD&SI, October 2015, pp. 1–6.
[9] N.B. Truong, G.M. Lee, Y. Ghamri-Doudane, Software defined networking-based vehicular ad hoc network with fog computing, in: Proc. IFIP/IEEE IM, May 2015, pp. 1202–1207.
[10] M. Chiang, T. Zhang, Fog and IoT: an overview of research opportunities, IEEE Internet Things J. 3 (December (6)) (2016) 854–864.
[11] A.V. Dastjerdi, R. Buyya, Fog computing: helping the internet of things realize its potential, Computer 49 (August (8)) (2016) 112–116.
[12] E. Marín-Tordera, X. Masip-Bruin, J. García-Almiñana, A. Jukan, G. Ren, J. Zhu, Do we all really know what a fog node is? Current trends towards an open definition, Comput. Commun. 109 (September 2017) 117–130.
[13] M. Satyanarayanan, P. Bahl, R. Caceres, N. Davies, The case for VM-based cloudlets in mobile computing, IEEE Pervasive Comput. 8 (October (4)) (2009) 14–23.
[14] F. Büsching, S. Schildt, L. Wolf, DroidCluster: towards smartphone cluster computing – the streets are paved with potential computer clusters, in: Proc. ICDCSW, June 2012, pp. 114–117.
[15] S. Bitam, A. Mellouk, ITS-cloud: cloud computing for intelligent transportation system, in: Proc. IEEE GLOBECOM, December 2012, pp. 2054–2059.
[16] What is Docker, 2017. [Online]. Available (Accessed 3 September) https://www.
[17] S.H. Newaz, Towards realizing the importance of placing fog computing facilities at the central office of a PON, in: Proc. IEEE ICACT, February 2017, pp. 152–157.
[18] A. Lewis, S. Ghosh, N. Tzeng, Run-time energy consumption estimation based on workload in server systems, in: Proc. USENIX HotPower, December 2008, pp. 1–5.
[19] D. Economou, S. Rivoire, C. Kozyrakis, P. Ranganathan, Full-system power analysis and modeling for server environments, in: Proc. IEEE MoBS, June 2006, pp. 1–8.
[20] S. Yan, M. Peng, W. Wang, User access mode selection in fog computing based radio access networks, in: Proc. IEEE ICC, May 2016, pp. 1–6.
[21] M. Peng, S. Yan, K. Zhang, C. Wang, Fog-computing-based radio access networks: issues and challenges, IEEE Netw. 30 (July (4)) (2016) 46–53.
[22] B. Tang, et al., A hierarchical distributed fog computing architecture for big data analysis in smart cities, in: Proc. ASE BD&SI, October 2015, pp. 1–6.
[23] T. Nishio, R. Shinkuma, T. Takahashi, N.B. Mandayam, Service-oriented heterogeneous resource sharing for optimizing service latency in mobile cloud, in: Proc. ACM MobileCloud, July 2013, pp. 19–26.
[24] K. Hong, D. Lillethun, U. Ramachandran, B. Ottenwälder, B. Koldehofe, Mobile fog: a programming model for large-scale applications on the Internet of Things, in: Proc. ACM MCC, August 2013, pp. 15–20.
[25] S. Sarkar, S. Chatterjee, S. Misra, Assessment of the suitability of fog computing in the context of Internet of Things, IEEE Trans. Cloud Comput. 6 (January/March (1)) (2018) 46–59.
[26] C.T. Do, N.H. Tran, C. Pham, M.G.R. Alam, J.H. Son, C.S. Hong, A proximal algorithm for joint resource allocation and minimizing carbon footprint in geo-distributed fog computing, in: Proc. IEEE ICOIN, January 2015, pp. 324–329.
[27] F. Jalali, K. Hinton, R. Ayre, T. Alpcan, R.S. Tucker, Fog computing may help to save energy in cloud computing, IEEE J. Sel. Areas Commun. 34 (May (5)) (2016).
[28] H. Yao, C. Bai, D. Zeng, Q. Liang, Y. Fan, Migrate or not? Exploring virtual machine migration in roadside cloudlet-based vehicular cloud, Concurr. Computation: Pract. Exp. 27 (December (18)) (2015) 5780–5792.
[29] R. Deng, R. Lu, C. Lai, T.H. Luan, Towards power consumption-delay tradeoff by workload allocation in cloud-fog computing, in: Proc. IEEE ICC, June 2015, pp. 3909–3914.
[30] E. Baccarelli, P.G.V. Naranjo, M. Scarpiniti, M. Shojafar, J.H. Abawajy, Fog of everything: energy-efficient networked computing architectures, research challenges, and a case study, IEEE Access 5 (2017).
[31] OpenFog reference architecture, accessed on June 07, 2017. [Online]. Available
[32] X. Masip-Bruin, E. Marín-Tordera, G. Tashakor, A. Jukan, G.J. Ren, Foggy clouds and cloudy fogs: a real need for coordinated management of fog-to-cloud computing systems, IEEE Wirel. Commun. 23 (October (5)) (2016).
[33] H. Lee, A. Nakao, A study of user incentive mechanism in named data networking, in: Proc. IEEE COMSNETS, January 2015, pp. 1–8.
[34] Y. Li, B. Shen, J. Zhang, X. Gan, J. Wang, X. Wang, Offloading in HCNs: congestion-aware network selection and user incentive design, IEEE Trans. Wirel. Commun. 16 (October (10)) (2017) 6479–6492.
[35] H.R. Arkian, A. Diyanat, A. Pourkhalili, MIST: fog-based data analytics scheme with cost-efficient resource provisioning for IoT crowdsensing applications, J. Netw. Comput. Appl. 82 (March (C)) (2017) 152–165.
[36] M.K. Gupta, T. Amgoth, Resource-aware algorithm for virtual machine placement in cloud environment, in: Proc. IEEE IC3, August 2016, pp. 1–6.
[37] W. Kim, S. Chung, User-participatory fog computing architecture and its management schemes for improving feasibility, IEEE Access 6 (March 2018) 20262–20278.
[38] M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, New York, 1979, pp. 245–248.
[39] P. Mahadevan, P. Sharma, S. Banerjee, P. Ranganathan, A power benchmarking framework for network devices, in: Proc. NETWORKING, May 2009, pp. 795–808.
[40] D. Meisner, B.T. Gold, T.F. Wenisch, PowerNap: eliminating server idle power, in: Proc. ACM ASPLOS, March 2009, pp. 205–216.
[41] R. Morabito, Power consumption of virtualization technologies: an empirical investigation, in: Proc. IEEE/ACM UCC, December 2015, pp. 522–527.
[42] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Found. Trends Mach. Learn. 3 (January (1)) (2011) 1–122.
[43] M. Zhu, S. Martínez, On distributed convex optimization under inequality and equality constraints, IEEE Trans. Autom. Control 57 (January (1)) (2012).

Won-Suk Kim received the B.E., M.S., and Ph.D. degrees in electrical and computer engineering from Pusan National University, Busan, South Korea, in 2010, 2012, and 2017, respectively. Since 2017, he has been a Post-Doctoral Research Fellow with Pusan National University, where he is involved in the BK21 Plus Creative Human Resource Development Program. He is currently a Research Fellow with the Dong-Nam Grand ICT Research Center. Dr. Kim was a recipient of the Minister's Award, the highest award in creative talent evaluation, in 2017. His research interests include wireless mesh networks, enterprise wireless local area networks, software-defined networking, fog computing, and augmented reality.

Sang-Hwa Chung received the B.S. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 1985, the M.S. degree in computer engineering from Iowa State University, Ames, IA, USA, in 1988, and the Ph.D. degree in computer engineering from the University of Southern California, Los Angeles, CA, USA, in 1993. From 1993 to 1994, he was an Assistant Professor with the Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL, USA. He is currently a Professor with the Computer Engineering Department, Pusan National University, Busan. Since 2016, he has been the Director of the Dong-Nam Grand ICT Research Center. Dr. Chung received the Best Paper Award from ETRI Journal in 2010 and the Engineering Paper Award from Pusan National University in 2011. In 2017, he was selected as an Excellent Research Professor of computer engineering at Pusan National University. His research interests are in the areas of embedded systems, wireless networks, software-defined networking, and smart factory.